By Thorsten von Eicken
August 25, 2008 06:00 AM EDT
Over the years we've witnessed a shift to hosted IT infrastructure, where the issues surrounding the physical plant are consolidated and managed by a specialist service. In the past six months, cloud computing has taken off at an incredible rate and is now allowing businesses to shed the problems of ordering, racking, and maintaining servers and disk storage systems.
The public cloud is knocking down the barriers for a broader business audience that has seen the advantages of "pay as you go" IT and of not having to build or rent another data center. Why do that when you can instantly spin up 10, or 1,000, virtual server instances at a fraction of the cost? Cloud infrastructure providers like Amazon are putting out the technology that enterprises and SaaS providers need to move beyond testing the waters and take advantage of the cloud today. The latest, and the most important from a data storage perspective, is Amazon's Elastic Block Store, or EBS.
Datasets, Throughput and Snapshots
In short, EBS is a SAN (Storage Area Network) in the cloud that works with Amazon's existing Elastic Compute Cloud (EC2) and Simple Storage Service (S3). One hurdle for many businesses has been the data storage and throughput limits of each instance. Now you can allocate a disk volume of 1GB to 1TB from what is a virtually endless SAN in the cloud and attach it to an instance running in EC2. The volume is stored on redundant disks and has a lifetime separate from any instance on which it is mounted. This is important: previously, the data was lost when an instance was no longer used. Now you can unmount a volume and later remount it on another instance. We'll look at how to use EBS to get very large datasets into the cloud.
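As a rough sketch of that workflow using the Python boto library, the volume size, availability zone, and the device-naming helper below are illustrative assumptions, not prescriptions:

```python
def next_free_device(attached):
    """Pick the next unused device name in the /dev/sd[f-p] range
    conventionally used for EBS volumes on EC2 instances."""
    for letter in "fghijklmnop":
        device = "/dev/sd" + letter
        if device not in attached:
            return device
    raise RuntimeError("no free device slots on this instance")

def create_and_attach(instance_id, size_gb=100, zone="us-east-1a",
                      attached=()):
    """Allocate an EBS volume and attach it to a running EC2 instance.
    Assumes boto is installed and AWS credentials are configured."""
    import boto  # imported here so the helper above stays standalone
    conn = boto.connect_ec2()
    volume = conn.create_volume(size_gb, zone)  # anywhere from 1 GB to 1 TB
    device = next_free_device(attached)
    conn.attach_volume(volume.id, instance_id, device)
    # On the instance itself you would then create a filesystem and
    # mount it, e.g.: mkfs.ext3 /dev/sdh && mount /dev/sdh /data
    return volume.id, device
```

Because the volume's lifetime is independent of the instance, detaching it and attaching it to a different instance preserves all the data on it.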
Another benefit of EBS is its snapshotting feature. You can snapshot a volume to S3, where it is stored with the redundancy and durability of all objects on S3. Moreover, successive snapshots are incremental, providing a very powerful and efficient backup capability for volumes. Taking a consistent snapshot is a complex task, but RightScale provides some cool scripts that make it easy to freeze all data access while the snapshot is taken, ensuring that the data on the snapshot is consistent.
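The idea behind incremental snapshots can be illustrated in a few lines of Python. This is a toy model with a tiny block size; EBS itself works at a much larger granularity and stores the blocks in S3:

```python
BLOCK = 4  # toy block size in bytes; real snapshots use far larger blocks

def split_blocks(data):
    """Split a volume image into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def incremental_snapshot(previous, current):
    """Return only the blocks that changed since the previous snapshot,
    keyed by block index -- the essence of an incremental backup."""
    prev, curr = split_blocks(previous), split_blocks(current)
    return {i: blk for i, blk in enumerate(curr)
            if i >= len(prev) or prev[i] != blk}

def apply_delta(previous, delta):
    """Rebuild the current image from the prior image plus the delta
    (assumes the volume only grows by whole blocks at the end)."""
    blks = split_blocks(previous)
    for i, blk in sorted(delta.items()):
        if i < len(blks):
            blks[i] = blk
        else:
            blks.append(blk)
    return b"".join(blks)
```

Only the changed blocks travel to S3 on each snapshot, which is why keeping frequent backups of a large, mostly-static volume stays cheap.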
The RightScale Dashboard supports all the features of EBS and offers a number of additional capabilities, such as configuring volumes to be attached automatically to servers when they launch and tracking the ancestry of a volume or snapshot. What does EBS enable? In short: traditional processing on large datasets and reliable storage for many servers. Let's look at these two areas one by one.
Amazon Web Services are designed for scale. EC2, S3, SQS, and SDB are ideally suited to building large systems that process huge data volumes. The catch has been that they are geared toward modern service-oriented systems that use a non-relational database like Amazon SDB and thrive on large numbers of simple servers (EC2). Business users have more traditional applications, such as relational databases, that require large datasets stored in a file system with a POSIX interface. While an EC2 extra-large instance comes with about 1.4TB of local disk space, that space is difficult to use in a production system: populating the disk with data at boot time can take hours, and backups, replication, and restoring the data after an instance failure are all sore points. Up to about 100GB the timescales are workable, but beyond that it gets difficult.
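To see why the timescales become unworkable beyond roughly 100GB, a back-of-the-envelope calculation helps. The 50 MB/s sustained throughput used here is an assumption for illustration, not a published EC2 figure:

```python
def transfer_hours(size_gb, mb_per_sec=50.0):
    """Hours needed to copy a dataset onto an instance's local disk
    at a given sustained transfer rate (assumed, for illustration)."""
    return size_gb * 1024 / mb_per_sec / 3600

# Roughly half an hour for 100 GB, but nearly six hours for a full
# 1 TB dataset -- and a restore after an instance failure costs the
# same again, which is exactly the pain EBS volumes avoid.
for size in (100, 1024):
    print("%4d GB: %.1f hours" % (size, transfer_hours(size)))
```

With an EBS volume the data never has to be re-copied at boot: attaching the existing volume to a fresh instance takes seconds regardless of its size.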