By Randy Bias
December 30, 2008
There is a myth going around about virtualization and cloud computing. It’s expressed in a variety of ways, but the takeaway is always the same: “Public clouds are big virtual server clouds.” It sounds good, but it doesn’t hold up once you look under the covers, and for good reason: virtualization isn’t a panacea.
Here’s the deal. Public clouds, whether IaaS, PaaS, or SaaS, are all multi-tenant; multi-tenancy is a defining, core property of any cloud, whether it’s GoGrid, EC2, Google App Engine (GAE), or Salesforce.com (SFDC). Multi-tenancy is the mechanism by which public cloud providers deliver cost efficiencies to customers: aggregating capex at scale and providing the result as a subscription service.
Virtualization is just a multi-tenancy strategy.
Virtualization as a Multi-Tenancy Strategy
That’s right: it’s only a multi-tenancy strategy, and not all clouds use virtualization. Clouds like GAE and SFDC use completely different technologies to achieve multi-tenancy. Even among strict compute clouds, providers like AppNexus surface physical hardware that customers then carve up into virtual machines themselves, while others, like NewServers, serve up completely physical clouds. For those providers the multi-tenancy boundary is coarser: a single piece of physical hardware per customer.
Scaling Up Still Matters
Simply put, for the foreseeable future there are many pieces of software that scale better ‘up’ than ‘out’. Your traditional RDBMS, for example, is much easier to scale by throwing physical iron (instead of virtual instances) at the problem.
A well-known Web 2.0 company, one of the poster children of the movement, recently told me that it serves hundreds of thousands of customers from big database servers with 128GB of RAM and lots of high-speed disk spindles. If they can scale up their RDBMS by simply throwing iron at it, why would they re-architect onto (for example) 10 extra large EC2 instances and take on the engineering effort of a heavily sharded database?
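To make that engineering effort concrete, here is a minimal sketch (in Python; the host names and hashing scheme are hypothetical, not anything this company described) of the routing layer a sharded application has to carry:

```python
import hashlib

# Hypothetical shard map: ten database hosts, one per EC2 instance.
# Host names are made up for illustration.
SHARDS = ["db-shard-%02d.internal" % i for i in range(10)]

def shard_for(customer_id):
    """Route a customer to a shard by hashing the customer ID."""
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query path now needs this indirection, and any cross-shard
# work (reports, migrations, rebalancing when shards are added) must
# be handled in application code -- that is the engineering cost.
print(shard_for("acme-corp"))
```

And routing is the easy part; moving data around when you add or rebalance shards is where most of the real engineering cost lives.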
To put the costs in perspective, compare the two options:

Scale out on EC2:
- 10 extra large EC2 instances
- 16GB RAM each
- 2 cores each
- ~8 EBS network-based storage devices
- ~$6,000/month including storage
- plus $X to engineer sharding at the application level

Scale up on big iron:
- 2 redundant big iron physical servers
- 128GB RAM each
- 8-12 cores each
- 16 high-speed spindles on local disk
- $40,000 in capex, or ~$7,500/month for servers + storage
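To see how the numbers compare on a monthly basis, here is a quick back-of-the-envelope sketch in Python (the 36-month amortization window is my assumption, not a figure from the comparison above):

```python
# Back-of-the-envelope math using the figures above.
# Assumption (mine, not from the comparison): capex amortized over 36 months.

ec2_monthly = 6000            # 10 extra large instances + EBS storage
big_iron_capex = 40000        # 2 redundant 128GB servers with local disk
amortization_months = 36      # assumed server lifetime

big_iron_monthly = big_iron_capex / float(amortization_months)

print("EC2 scale-out: ~$%d/month, plus $X of sharding work" % ec2_monthly)
print("Big iron:      ~$%d/month amortized "
      "(or ~$7,500/month for servers + storage, rented)" % big_iron_monthly)
```

Amortized, the big iron comes out well under the EC2 figure even before the sharding effort is priced in.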
It’s kind of a no-brainer: for certain use cases it’s simply more economical to scale with bigger hardware. There are two key reasons why this won’t change in the near future. First, many folks are working hard to make database software scale better across more cores. Second, we’ll see 16 and 32 cores per 1U server in the not-so-distant future. Scaling up will continue to be a viable option. Period. Clouds need to enable it in the same way they enable virtualized servers for scaling out; it’s not an either/or proposition.
Update: The ‘well-known’ Web 2.0 company mentioned above has informed me that my estimate for the dedicated hardware was far too high; something around $5,000 for those servers is more accurate, which leaves even less reason to choose scale-out.