Years ago, people often talked about the advent of “grid computing” or “utility computing”: a world in which we would consume computing power much as we plug into an electrical outlet or turn on a faucet.
Somehow it hasn’t turned out quite that way, at least not in the form in which Cloud computing is most often adopted today, with the IaaS (Infrastructure as a Service) model prevailing in most Cloud deployments. With IaaS, administrators and architects still have to specify too many of the technical details. Some elements, like cloud-based storage, do in fact embrace more of the pay-for-what-you-use model. However, the major cost drivers of the Cloud, virtual machine compute power and database server capacity, are still primarily sold in coarse, fixed-size chunks at specific VM sizes, regardless of how much capacity is actually consumed.
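The fixed-size-chunk problem above can be sketched in a few lines: demand must be rounded up to the nearest catalog size, and the gap between what you need and what you rent is paid-for but idle capacity. The VM sizes and the demand figure here are purely illustrative, not any provider's actual catalog.

```python
# Hypothetical catalog of VM sizes (in cores); real catalogs vary by provider.
VM_SIZES_CORES = [2, 4, 8, 16, 32]

def smallest_fitting_vm(cores_needed: int) -> int:
    """Return the smallest catalog size that covers the demand."""
    for size in VM_SIZES_CORES:
        if size >= cores_needed:
            return size
    raise ValueError("demand exceeds the largest available VM size")

demand = 9  # cores actually needed at peak (illustrative)
allocated = smallest_fitting_vm(demand)
wasted = allocated - demand
print(f"need {demand} cores -> rent {allocated}, {wasted} sit idle")
# -> need 9 cores -> rent 16, 7 sit idle
```

The jump from 8 to 16 cores is the point: needing even one core more than a size class offers forces you into the next class up, and you pay for the difference whether you use it or not.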
Our concept of utility computing envisioned something more akin to drawing electrical power from a wall outlet: if you have a device that requires “X” kilowatts, you simply confirm that your cable and connectors can carry that load, then start consuming, knowing that you’ll be billed for only the exact amount of power you consumed.
The corresponding analogy for the typical IaaS cloud provider model today might be: I state up front that I’m going to use up to “X” kW of power, so please dedicate a size-17 generator to me, which I rent regardless of how much power I actually use. At least Microsoft Azure charges only for each actual minute used, whereas Amazon AWS bills a full hour for any fraction of an hour consumed.
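The billing-granularity difference just described (as the two providers' models stood when this was written) is simple arithmetic: per-minute billing charges pro-rata for actual minutes, while per-hour billing rounds any partial hour up. The hourly rate below is invented for the sake of the arithmetic; real prices vary by instance type and region.

```python
import math

HOURLY_RATE = 0.50  # hypothetical $/hour for some VM size

def cost_per_minute_billing(minutes_used: float) -> float:
    """Pro-rata billing: pay for exactly the minutes consumed."""
    return (minutes_used / 60.0) * HOURLY_RATE

def cost_per_hour_billing(minutes_used: float) -> float:
    """Hourly billing: any fraction of an hour is rounded up to a full hour."""
    return math.ceil(minutes_used / 60.0) * HOURLY_RATE

minutes = 95  # ran the VM for 1 hour 35 minutes
print(cost_per_minute_billing(minutes))  # ~0.79: pay for 95 minutes
print(cost_per_hour_billing(minutes))    # 1.00: billed for 2 full hours
```

The gap is worst for short-lived workloads: a 5-minute job billed by the hour costs twelve times what it costs billed by the minute.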
The good news is that this cost model applies only when consuming cloud services in the traditional, “way-we’ve-always-done-it” approach to IaaS. Once we move to Cloud services in PaaS (Platform as a Service) form, we find that services truly can be purchased on a pay-as-you-use basis, driven by performance metrics alone rather than predefined, rigid spec classes.
Microsoft is doing a particularly good job in the database arena. With Amazon’s relational-database-as-a-service (RDS), it’s still necessary to spec out a VM instance size to support your capacity requirements, which makes it likely that a decent portion of the allocated “horsepower” will sit under-utilized most of the time. But Microsoft Azure SQL Database now offers something called “elastic database pools,” in which the defining variable is the aggregate throughput you need, measured in elastic Database Transaction Units (eDTUs), and it becomes Microsoft’s problem to provide appropriate resources to meet that commitment. No more guesstimating VM core and RAM specifications; no more grossly oversizing VMs just to make sure peak demands can be met. Instead, it’s all on the cloud provider to maintain peak performance at all times, regardless of the resources committed behind the scenes. Even better, all those messy details you formerly had to cover yourself, like backups, disaster recovery, geo-redundancy, and licensing, are completely offloaded to the cloud provider.
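The oversizing argument above reduces to a comparison of two cost models: an IaaS deployment must be provisioned for peak demand around the clock, while a pay-for-use model bills only for capacity actually consumed. The demand profile and unit price below are invented for illustration, not measured from any real workload.

```python
# Hypothetical demand over an 8-hour window, in generic capacity units per hour.
hourly_demand = [20, 20, 30, 80, 100, 60, 25, 15]
UNIT_PRICE = 0.01  # hypothetical $ per capacity-unit-hour

peak = max(hourly_demand)
# IaaS: rent enough for the peak, every hour, whether used or not.
iaas_cost = peak * len(hourly_demand) * UNIT_PRICE
# Pay-for-use: billed only for the capacity actually consumed.
paas_cost = sum(hourly_demand) * UNIT_PRICE

print(f"provision-for-peak: ${iaas_cost:.2f}")
print(f"pay-for-use:        ${paas_cost:.2f}")
utilization = sum(hourly_demand) / (peak * len(hourly_demand))
print(f"average utilization of the oversized deployment: {utilization:.0%}")
```

With this (made-up) profile the peak-provisioned deployment costs more than twice as much and sits well under half utilized on average, which is exactly the waste the elastic-pool model shifts onto the provider.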
Bottom line: we need to stop thinking of cloud IT services as only traditional infrastructure-based solutions. If organizations approach them as just another set of VMs in another data center, then that’s all we get and nothing more. Sure, there are some efficiencies and extra capabilities even with plain IaaS, but to get ourselves out of the nuts-and-bolts arena of hardware-level server management (CPU cores, RAM, and so on), we need to start approaching things from a more PaaS and SaaS direction. We should be specifying WHAT we want, not HOW we want it.
I’m really excited by what Microsoft Azure is offering in the PaaS space, especially with SQL as a service. We’re finally getting to true pay-just-for-what-you-use utility computing. In IT, we’re always pushed by businesses to reduce costs, increase efficiency, and attain flawless availability. PaaS-based solutions put IT that much closer to fully realizing those goals.