Cloud Expo: Article

Scalability – The New Buzzword for Cloud Computing

An exclusive Q&A with Barbara P. Aichinger, co-founder of FuturePlus Systems and VP of New Business Development

"I think SMAC will continue to grow and mature, and the demand for more sophisticated data like streaming video and online television will be a real game changer," stated Barbara P. Aichinger, co-founder of FuturePlus Systems and VP of New Business Development, in this exclusive Q&A with the Cloud Expo Conference Chair. "Analytics is also very important and will become part of 'Big Data,' meaning the data pulled and pushed to and from the cloud will have an analytic associated with it."

Cloud Computing Journal: The move to the cloud isn't about saving money, it's about saving time - agree or disagree?

Barbara P. Aichinger: It's about saving both time and money, but it can be scary. As someone who really understands how the hardware works and, more importantly, how it doesn't, I can understand the apprehension. The cloud needs standards, especially in the area of quality and reliability, so that folks know their data is safe.

Cloud Computing Journal: How should organizations tackle their regulatory and compliance concerns in the cloud? Who should they be asking/trusting for advice?

Aichinger: I see two pieces here. The first is the actual hardware, the networking, and the physical building the machines sit in. The second is all the layered products and software. My company is on the hardware side, so I can speak to that. We make validation tools used by the designers of cloud servers, and we see cost pressures causing hardware vendors to take shortcuts on system validation. Those shortcuts can then show up in the data center as "ghost errors": memory errors that occur only once in a while but over time cause system outages.

The industry does not have a formal compliance program for the memory subsystem. What it has instead are tools for testing the memory subsystem against the JEDEC memory specification, and using them is basically voluntary. If you are a good vendor, you use the right equipment to make sure your servers are compliant. The problem is that many vendors do not, and since end users have no idea about memory standards, they never ask what type of compliance testing is being done. IT managers should insist on seeing the validation reports for the servers they buy. System integrators who package up various hardware pieces and sell them as a complete server should also take a good hard look at what memory subsystem compliance testing has been done.

Cloud Computing Journal: What does the emergence of open source clouds mean for the cloud ecosystem? How does the existence of OpenStack, CloudStack, OpenNebula, Eucalyptus and so on affect your own company?

Aichinger: I think open source is great, especially for small companies trying to put something together for initial product releases. We use open source Ubuntu and the Google Stress App Test to exercise cloud hardware so that our tool can see if the memory subsystem is violating the JEDEC rules. In eight out of ten systems we look at, we find violations. These don't cause errors right away, but over months and years they are statistically very likely to cause system crashes and cloud outages. OpenStack, CloudStack, OpenNebula, and Eucalyptus are all great additions to the cloud ecosystem. Our role at FuturePlus Systems is to make sure the hardware stays up by validating the design, so these products can add value to the users of the cloud.
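As a rough illustration of that workflow, here is a minimal sketch of driving memory traffic with `stressapptest` (Google's open-source memory stress tester) while external validation equipment watches the DDR bus. The wrapper function is my own illustration, not a FuturePlus tool, and it assumes stressapptest's standard `-s` (seconds) and `-M` (megabytes) options:

```python
import shutil
import subprocess

def stress_memory(seconds: int = 60, megabytes: int = 1024) -> int:
    """Exercise the memory subsystem with stressapptest so that an
    external bus analyzer can watch the traffic for JEDEC violations."""
    if shutil.which("stressapptest") is None:
        raise RuntimeError("stressapptest is not installed")
    # -s: test duration in seconds; -M: megabytes of memory to exercise
    result = subprocess.run(
        ["stressapptest", "-s", str(seconds), "-M", str(megabytes)])
    return result.returncode  # 0 indicates the stress run passed
```

The stress tool itself only generates worst-case traffic; spotting protocol violations still requires a probe on the memory bus, since the violations described above do not necessarily surface as software-visible errors.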

Cloud Computing Journal: With SMBs, the two primary challenges they face moving to the cloud are always stated as being cost and trust: where is the industry on satisfying SMBs on both points simultaneously - further along than in 2011-12, or...?

Aichinger: I think consumers don't have too much of a problem moving to the cloud, but for small businesses it can be a challenge. For a core engineering company like FuturePlus Systems, the issue is more trust than cost. We know the ins and outs of the cloud hardware and the network, so we take a good hard look at what data we keep in the cloud. Having said that, we are excited to see our tools being used more and more by system integrators and cloud IT managers. We are teaching them how to make sure the systems they deploy in the cloud are high quality and that the memory subsystems are compliant with the standards. As the cloud becomes "healthier" and more secure, more SMBs will be comfortable moving to it.

Cloud Computing Journal: 2013 seems to be turning into a breakthrough year for Big Data. How much does the success of cloud computing have to do with that?

Aichinger: With all the sensors and mobile devices on the market, Big Data is inevitable, and that pushes the need for expansion in the cloud. Scalability is the new buzzword for cloud computing. I have read papers that say Google already has one million servers deployed. Facebook is heading there quickly, right along with others.

What many don't stop and think about is how failures scale. In 2009 a landmark study looked at the failure rate of the memory DIMMs in Google's fleet of servers. The data suggested that failures occurred far more often than the vendors' specifications indicated they should. Scaling the reported uncorrectable error rate of 1.3% to 4% out to the one million servers Google has would have servers going down 13,000 to 40,000 times a year. Boil that down and you have roughly two to five failures every hour somewhere in the Google fleet. These failures are expensive: the system has to be taken offline and repaired or replaced, and energy costs, labor, and parts can quickly add up.
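The scaling arithmetic behind those figures is easy to check. A quick sketch, using only the fleet size and per-server annual error rates quoted above (the variable names are mine):

```python
# Scale the quoted per-server annual uncorrectable-error rate (1.3% to 4%)
# to the quoted fleet size of one million servers.
FLEET = 1_000_000
LOW_RATE, HIGH_RATE = 0.013, 0.04   # failures per server per year
HOURS_PER_YEAR = 365 * 24           # 8,760

low_per_year = FLEET * LOW_RATE     # 13,000 failures per year
high_per_year = FLEET * HIGH_RATE   # 40,000 failures per year

print(f"{low_per_year:,.0f} to {high_per_year:,.0f} failures per year")
# -> 13,000 to 40,000 failures per year
print(f"{low_per_year / HOURS_PER_YEAR:.1f} to "
      f"{high_per_year / HOURS_PER_YEAR:.1f} failures per hour")
# -> 1.5 to 4.6 failures per hour
```

So "two to five an hour" is a fair round-number summary of 1.5 to 4.6 failures per hour, continuously, across the fleet.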

Cloud Computing Journal: What about the role of social: aside from the acronym SMAC itself (Social, Mobile, Analytics, Cloud), are you seeing and/or anticipating major traction in this area?

Aichinger: I think SMAC will continue to grow and mature, and the demand for more sophisticated data like streaming video and online television will be a real game changer. Analytics is also very important and will become part of "Big Data," meaning the data pulled and pushed to and from the cloud will have an analytic associated with it. FuturePlus Systems has been dealing with Big Data for decades, since the tools we use to validate cloud hardware capture every signal on every clock edge. With cloud hardware going faster and becoming greener, we have lots more data points to add to our own "Big Data" problem. I also think SMAC will drive better visualization techniques, so that humans will be better able to digest all of the analytics associated with it.

Cloud Computing Journal: To finish, just as real estate is always said to be about "location, location, location", what one word, repeated three times, would you say Cloud Computing is all about?

Aichinger: Standards, Standards, Standards. When I meet engineers and managers who actually have to deploy or provide cloud hardware, they seem to be constantly exhausted. Pricing pressures are causing them to look hard at the nuances between the various server platforms, disk vendors, and DIMM DRAM memory vendors. How do they know if it will work reliably? How do they know what the performance is? Will it be hard to maintain?

Most people never think about the actual machines that run in the data centers, yet these data centers are the lifeblood of the cloud. If they don't work well, I don't care how good or open your software is... it's not going to do anything. Good, solid standards that address the functionality of the hardware, the reliability of the disks, and the JEDEC compliance of the memory subsystem would go a long way toward advancing cloud computing.

This is where FuturePlus Systems comes in. We have been providing validation tools for both hardware and software developers for more than 20 years. We are moving into the data center to help customers evaluate the servers they are purchasing, and to let software developers see what performance loads they are putting on the system. This year will be our first year exhibiting at Cloud Expo in New York, so we hope folks will come by our kiosk in the Big Data Pavilion and take a look at our DDR3 Detective tool.

More Stories By Liz McMillan

News Desk compiles and publishes breaking news stories, press releases and latest news articles as they happen.
