By Randy Bias
December 30, 2008 05:00 AM EST
There is a myth going around about virtualization and cloud computing. It's expressed in a variety of ways, but the takeaway is always the same: "Public clouds are big virtual server clouds." It sounds good, but it doesn't hold up once you look under the covers, and for good reason: virtualization isn't a panacea.
Here's the deal. Public clouds (IaaS, PaaS, or SaaS) are all multi-tenant; multi-tenancy is part of the fundamental definition and one of the core properties of any cloud, whether that cloud is GoGrid's, EC2, Google App Engine (GAE), or Salesforce.com (SFDC). Multi-tenancy is the mechanism by which public cloud providers deliver cost efficiencies: they aggregate capital expenditure at scale and provide it back to customers as a subscription service.
Virtualization is just a multi-tenancy strategy.
Virtualization as Multi-Tenant Solution
That's right: it's only a multi-tenancy strategy, and not all clouds will use virtualization. Clouds like GAE and SFDC use completely different technologies to create multi-tenancy. Even among strict compute clouds, some providers, like AppNexus, surface physical hardware that customers then carve up themselves into virtual machines, while others, like NewServers, serve up completely physical clouds. For those providers the multi-tenancy strategy is coarser, based simply on dedicating a whole piece of physical hardware to each customer.
Scaling Up Still Matters
Simply put, for the foreseeable future there are many pieces of software that scale better 'up' than 'out'. Your traditional RDBMS, for example, is much easier to scale by throwing physical iron (instead of virtual instances) at the problem.
A well-known Web 2.0 company, one of the poster children of the movement, recently told me that they run hundreds of thousands of customers on big database servers with 128GB of RAM and lots of high-speed disk spindles. If they can scale their RDBMS simply by throwing iron at it, why would they re-architect onto (for example) 10 extra-large EC2 instances and take on the engineering effort of a heavily sharded database?
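To make that engineering effort concrete, here is a minimal sketch of application-level shard routing in Python; the shard hosts and customer IDs are hypothetical, and a real system needs far more than this (re-sharding, data migration, cross-shard queries):

```python
import hashlib

# Hypothetical shard map: one host per physical database server.
SHARDS = [
    "db-shard-0.internal",
    "db-shard-1.internal",
    "db-shard-2.internal",
    "db-shard-3.internal",
]

def shard_for(customer_id: str) -> str:
    """Pick a shard by hashing the customer ID.

    Stable only while len(SHARDS) is fixed: adding a shard remaps
    most keys, so growing the cluster means migrating data, which
    is a large part of the real engineering cost.
    """
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

if __name__ == "__main__":
    for cid in ("cust-1001", "cust-1002", "cust-1003"):
        print(cid, "->", shard_for(cid))
```

And routing is the easy part: any query that spans customers turns into scatter/gather code across every shard, instead of one SQL statement against one big box.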
To put this in perspective, you could do this:
- 10 extra-large EC2 instances
  - 16GB RAM each
  - 2 cores each
  - ~8 EBS network-based storage devices
  - ~$6,000/month including storage
  - $X to engineer for sharding at the application level
Or this:
- 2 redundant big-iron physical servers
  - 128GB RAM each
  - 8-12 cores each
  - 16 high-speed spindles on local disk
  - $40,000 in capex, or ~$7,500/month for servers + storage
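As a rough sanity check on those numbers, here is a back-of-the-envelope sketch in Python; the dollar figures are the estimates above, while the 36-month amortization window is my assumption, not part of the original comparison:

```python
# Back-of-the-envelope math on the figures above. The dollar
# amounts come from the comparison; the 36-month amortization
# period is an assumption for illustration only.
MONTHS = 36

ec2_monthly    = 6_000    # 10 XL EC2 instances + EBS storage
iron_capex     = 40_000   # 2 big-iron servers + storage, bought outright
iron_leased    = 7_500    # roughly the same gear, per month
iron_amortized = iron_capex / MONTHS

print(f"EC2:           ${ec2_monthly:,}/month + sharding engineering")
print(f"Iron (capex):  ${iron_amortized:,.0f}/month over {MONTHS} months")
print(f"Iron (leased): ${iron_leased:,}/month, no re-architecture")
```

Bought outright, the big iron runs at a fraction of the EC2 bill; even leased, it only has to beat $6,000/month plus the unquantified $X of sharding engineering.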
It's kind of a no-brainer. For certain use cases it's simply more economical to scale with bigger hardware, and there are two key reasons this won't change in the near future. First, many folks are working hard to make database software scale better across more cores. Second, we'll be at 16 and 32 cores per 1U server in the not-so-distant future. Scaling up will continue to be a viable option. Period. Clouds need to enable it in the same way they enable virtualized servers for scaling out. It's not an either/or proposition.
Update: The 'well-known' Web 2.0 company mentioned above has informed me that my estimate for the dedicated hardware was far too high; something around $5,000 for those servers is more accurate, meaning there is even less reason to consider scale-out as an option.