On Tuesday, Amazon Web Services launched a new set of tiny “t2” compute instances that provide a known base level of compute power but can also “burst up” as needed should computing needs change. That bursting can be financed by “CPU credits” that users accumulate during less busy times, according to the AWS blog.
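The blog post doesn't spell out the exact bookkeeping, but the mechanic can be sketched roughly like this. The credit unit below (one credit = one vCPU at 100% for one minute) matches AWS's published definition; the earn rate of 6 credits per hour is illustrative rather than a quoted figure for any particular t2 size:

```python
# Simplified model of t2 CPU-credit accounting. One credit is one vCPU
# running at 100% for one minute; the earn rate is an assumed figure
# for illustration, not an official number for a specific instance size.

def remaining_credits(initial, earn_per_hour, hourly_cpu_usage):
    """Track the credit balance hour by hour.

    hourly_cpu_usage: CPU utilization fractions (0.0-1.0), one per hour.
    Quiet hours bank credits; bursting above the baseline drains them.
    """
    credits = initial
    for usage in hourly_cpu_usage:
        credits += earn_per_hour       # credits accrue every hour
        credits -= usage * 60.0        # running at `usage` for 60 minutes
        credits = max(credits, 0.0)    # the balance can't go negative
    return credits

# Ten quiet hours at 5% CPU bank a cushion of credits...
print(remaining_credits(0, 6, [0.05] * 10))   # 30.0
# ...which a single hour of full-throttle bursting burns through.
print(remaining_credits(30, 6, [1.0]))        # 0.0
```

Note that with a 6-credit hourly earn rate, the break-even baseline works out to 6/60 = 10% CPU: below that, credits accumulate; above it, they drain.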
The new instances suit applications such as remote desktops (Amazon Workspaces, for instance), dev environments, low-traffic web sites and small databases — or any application marked by long stretches of low CPU use interrupted by high-usage spikes, according to Amazon.
The new “CPU credit” idea is roughly analogous to the burst capability AWS introduced with its general purpose SSD volumes, which let users accumulate I/O burst potential, Matt Wood, general manager of data science for AWS, said in an interview.
AWS also seems to be ripping a page out of the Google Cloud book by having these credits kick in automatically as needed. In March, when Google announced its sustained-use discounts (which likewise kick in automatically once a workload hits a certain level), even dyed-in-the-wool AWS fans loved the idea because it eased their administrative burden; many said they still have to manage too much manually in spreadsheets when it comes to cost tracking and associated tasks.
The itty-bitty instances, as seen in the chart, are considerably cheaper than their bigger brethren — an m3.medium instance costs $0.070 per hour on demand compared to $0.052 per hour for a t2.medium, although m3 instances are backed by SSD storage while t2 instances are not.
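Translating those hourly rates into a rough monthly figure (assuming a 720-hour month and on-demand pricing only, ignoring storage and data transfer):

```python
# Back-of-envelope monthly cost at the on-demand hourly rates quoted
# above. 720 hours approximates a 30-day month; storage and transfer
# charges are left out, so real bills will differ.
HOURS_PER_MONTH = 24 * 30  # 720

m3_medium = round(0.070 * HOURS_PER_MONTH, 2)
t2_medium = round(0.052 * HOURS_PER_MONTH, 2)

print(m3_medium)  # 50.4
print(t2_medium)  # 37.44
```

That puts the t2.medium roughly $13 a month cheaper than the m3.medium for an always-on workload.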
Amazon is famous for rolling out new instance types (and lower prices) as it sees fit, and did so even before there were other options. But now that other public clouds — Microsoft Azure, Google Cloud — are in town, you’d be forgiven for seeing this as a reaction to newer (and cheaper) rivals. Digital Ocean, for example, offers SSD-backed compute instances (aka “Droplets” in Digital Ocean parlance) for $5.00 a month, or $0.007 per hour.