
Nimble Storage Leverages Big Data & Cloud

High-performing, cost-effective Big-Data processing helps to make the best use of dynamic storage resources

If, as the adage goes, you should fight fire with fire, then perhaps it's equally justified to fight Big Data optimization requirements with -- Big Data.

It turns out that high-performing, cost-effective Big-Data processing helps to make the best use of dynamic storage resources by taking in all the relevant storage activity data, analyzing it, and then making the best real-time choices for dynamic hybrid storage optimization.

In other words, Big Data can be exploited to better manage complex data and storage. The concept, while tricky at first, is powerful and, I believe, a harbinger of what we're going to see more of, which is to bring high intelligence to bear on many more services, products and machines.

To explore how such Big Data analysis makes good on data storage efficiency, BriefingsDirect recently sat down with optimized hybrid storage provider Nimble Storage to hear their story on the use of HP Vertica as their data analysis platform of choice. Yes, it's the same Nimble that last month had a highly successful IPO. The expert is Larry Lancaster, Chief Data Scientist at Nimble Storage Inc. in San Jose, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do you use big data to support your hybrid storage optimization value?

Lancaster: At a high level, Nimble Storage recognized early, near the inception of the product, that if we were able to collect enough operational data about how our products are performing in the field, get it back home and analyze it, we'd be able to dramatically reduce support costs. Also, we can create a feedback loop that allows engineering to improve the product very quickly, according to the demands that are being placed on the product in the field.

Looking at it from that perspective, to get it right, you need to do it from the inception of the product. If you take a look at how much data we get back for every array we sell in the field, we could be receiving anywhere from 10,000 to 100,000 data points per minute from each array. Then, we bring those back home, we put them into a database, and we run a lot of intensive analytics on those data.

Once you're doing that, you realize that as soon as you do something, you have this data you're starting to leverage. You're making support recommendations and so on, but then you realize you could do a lot more with it. We can do dynamic cache sizing. We can figure out how much cache a customer needs based on an analysis of their real workloads.
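To make that concrete, here is a minimal sketch of one way to estimate cache needs from a read trace. It is an illustration only, not Nimble's algorithm: it ranks blocks by read frequency and reports how much flash it would take to absorb a chosen fraction of the reads.

```python
from collections import Counter

def working_set_bytes(read_trace, block_size=4096, hot_fraction=0.9):
    """Estimate the cache needed to absorb `hot_fraction` of reads.

    read_trace: iterable of block addresses observed in a sampling window.
    Returns the bytes of cache required so that the most frequently read
    blocks cover `hot_fraction` of all reads in the window.
    """
    counts = Counter(read_trace)          # reads per block
    total_reads = sum(counts.values())
    covered, blocks_needed = 0, 0
    for _, hits in counts.most_common():  # hottest blocks first
        covered += hits
        blocks_needed += 1
        if covered >= hot_fraction * total_reads:
            break
    return blocks_needed * block_size

# Toy example: a skewed workload where a few blocks get most of the reads.
trace = [1, 1, 1, 2, 2, 3, 4, 1, 2, 5, 1, 6, 2, 1]
print(working_set_bytes(trace, hot_fraction=0.8))  # bytes of flash for ~80% of reads
```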

We found that big data is really paying off for us. We want to continue to increase how much it's paying off for us, but to do that we need to be able to do bigger queries faster. We have a team of data scientists and we don't want them sitting here twiddling their thumbs. That’s what brought us to Vertica at Nimble.

Using Big Data

Gardner: It's an interesting juxtaposition that you're using big data in order to better manage data and storage. What better use of it? And what sort of efficiencies are we talking about here, when you're able to get that data at that massive scale, do these analytics, and then go back out into the field and adjust? What does that get for you?

Lancaster: We have a very tight feedback loop. In one release we put out, we may make some changes in the way certain things happen on the back end, for example, the way NVRAM is drained. There are some very particular details around that, and we can observe very quickly how that performs under different workloads. We can make tweaks and do a lot of tuning.

Without the kind of data we have, we might have to have multiple cases being opened on performance in the field and escalations, looking at cores, and then simulating things in the lab.

It's a very labor-intensive, slow process with very little data to base the decision on. When you bring home operational data from all your products in the field, you're now talking about being able to figure out in near real-time the distribution of workloads in the field and how people access their storage. I think we have a better understanding of the way storage works in the real world than any other storage vendor, simply because we have the data.

Gardner: So it's an interesting combination of a product lifecycle approach to getting data -- but also combining a service with a product in such a way that you're adjusting in real time.

Lancaster: That’s right. We do a lot of neat things. We do capacity forecasting. We do a lot of predictive analytics to try to figure out when the storage administrator is going to need to purchase something, rather than having them just stumble into the fact that they need to provision for equipment because they've run out of space.
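As a toy illustration of that kind of forecast (not Nimble's actual model), one could fit a straight line to recent capacity samples and extrapolate to the day the array fills up:

```python
from datetime import date, timedelta

def days_until_full(samples, capacity_bytes):
    """samples: list of (day_index, used_bytes) observations.
    Fits a least-squares line and returns days until used == capacity,
    or None if usage is flat or shrinking."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None
    slope = (n * sxy - sx * sy) / denom   # bytes consumed per day
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None
    return (capacity_bytes - intercept) / slope - samples[-1][0]

# Hypothetical example: a 1 TB array growing ~20 GB/day over ten daily samples.
obs = [(d, 400e9 + 20e9 * d) for d in range(10)]
remaining = days_until_full(obs, capacity_bytes=1e12)
print(date.today() + timedelta(days=round(remaining)))  # projected fill date
```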

A lot of things that should have been done in storage from the very beginning, and that sound straightforward, were simply never done. We're the first company to take a comprehensive approach to it. We open and close 80 percent of our cases automatically, and 90 percent of them are opened automatically.

We have a suite of tools that run on this operational data, so we don't have to call people up and say, "Please gather this data for us. Please send us these log posts. Please send us these statistics." Now, we take a case that could have taken two or three days and we turn it into something that can be done in an hour.

That’s the kind of efficiency we gain that you can see, and the InfoSight service delivers that to our customers.

Gardner: Larry, just to be clear, you're supporting both flash and traditional disk storage, but you're able to exploit the hybrid relationship between them because of this data and analysis. Tell us a little bit about how the hybrid storage works.

Challenge for hard drives

Lancaster: At a high level, you have hard drives, which are inexpensive, but they're slow for random I/O. For sequential I/O, they are all right, but for random I/O performance, they're slow. It takes time to move the platter and the head. You're looking at 5 to 10 milliseconds seek time for random read.

That's been the challenge for hard drives. Flash drives have come out and they can dramatically improve on that. Now, you're talking about microsecond-order latencies, rather than milliseconds.

But the challenge there is that they're expensive. You could go buy all flash or you could go buy all hard drives and you can live with those downsides of each. Or, you can take the best of both worlds.

Then, there's a challenge. How do I keep the data that I need to access randomly in flash, while keeping the rest of the data -- the data I don't need frequent random-read performance for -- on the hard drives only, and in that way optimize my use of flash? That's the way you can save money, but it's difficult to do.

It comes down to having some understanding of the workloads that the customer is running and being able to anticipate the best algorithms and parameters for those algorithms to make sure that the right data is in flash.
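For illustration only, a toy admission policy along those lines might require a block to be read a few times before it earns a place in flash, and then evict the least recently used block when the cache is full. The class and thresholds below are hypothetical, not Nimble's algorithm:

```python
from collections import OrderedDict, defaultdict

class FrequencyGatedCache:
    """Toy hybrid-cache admission policy: a block is copied into flash
    only after `admit_after` reads, then evicted LRU when the cache is full."""

    def __init__(self, capacity_blocks, admit_after=2):
        self.capacity = capacity_blocks
        self.admit_after = admit_after
        self.flash = OrderedDict()          # block -> True, kept in LRU order
        self.read_counts = defaultdict(int)

    def read(self, block):
        if block in self.flash:             # flash hit: microsecond-order latency
            self.flash.move_to_end(block)
            return "flash"
        self.read_counts[block] += 1
        if self.read_counts[block] >= self.admit_after:
            if len(self.flash) >= self.capacity:
                self.flash.popitem(last=False)   # evict least recently used block
            self.flash[block] = True
        return "disk"                       # HDD read: ~5-10 ms seek

cache = FrequencyGatedCache(capacity_blocks=2)
print([cache.read(b) for b in [7, 7, 7, 9, 9, 7]])
# ['disk', 'disk', 'flash', 'disk', 'disk', 'flash']
```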

We've built up an enormous dataset covering thousands of system-years of real-world usage to tell us exactly which approaches to caching are going to deliver the most benefit. It would be hard to be the best hybrid storage solution without the kind of analytics that we're doing.

Gardner: Then, to extrapolate a little bit higher, or maybe wider, for how this benefits an organization, the analysis that you're gathering also pertains to the data lifecycle, things like disaster recovery (DR), business continuity, backups, scheduling, and so forth. Tell us how the data gathering analytics has been applied to that larger data lifecycle equation.

Lancaster: You're absolutely right. One of the things that we do is make sure that we audit all of the storage that our customers have deployed to understand how much of it is protected with local snapshots, how much of it is replicated for disaster recovery, and how much incremental space is required to increase retention time, and so on.

We have very efficient snapshots, but at the end of the day, if you're making changes, snapshots still take some amount of space. So we learn exactly what that overhead is and how we can help you achieve your disaster recovery goals.

We have a good understanding of that in the field. We go to customers with proactive service recommendations about what they could and should do. But we also take into account the fact that they may be doing DR when we forecast how much capacity they are going to need.

Larger lifecycle

It is part of a larger lifecycle that we address, but at the end of the day, for my team it's still all about analytics. It's about looking to the data as the source of truth and as the source of recommendation.

We can tell you roughly how much space you're going to need to do disaster recovery on a given type of application, because we can look in our field and see the distribution of the extra space that would take and what kind of bandwidth you're going to need. We have all that information at our fingertips.
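The back-of-the-envelope version of that sizing is straightforward; the figures below are invented for illustration, not field data:

```python
# Hypothetical inputs: a 2 TB volume changing 3% per day, snapshots kept
# for 30 days, and a nightly 8-hour replication window.
volume_bytes = 2e12
daily_change = 0.03 * volume_bytes      # bytes rewritten per day
retention_days = 30

snapshot_overhead = daily_change * retention_days   # extra capacity for local snapshots
replication_bw = daily_change / (8 * 3600)          # bytes/s to replicate within the window

print(f"snapshot space ~ {snapshot_overhead / 1e12:.1f} TB")        # ~1.8 TB
print(f"replication bandwidth ~ {replication_bw * 8 / 1e6:.0f} Mbit/s")  # ~17 Mbit/s
```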

When you start to work this way, you realize that you can do things you couldn't do before. And the things you could do before, you can do orders of magnitude better. So we're a great case of actually applying data science to the product lifecycle, but also to front-line revenue and cost enhancement.

Gardner: How can you actually get that analysis in the speed, at the scale, and at the cost that you require?

Lancaster: To give you a brief history of my awareness of HP Vertica and my involvement around the product, I don't remember the exact year, but it may have been eight years ago, roughly. At some point, there was an announcement that Mike Stonebraker was involved in a group that was going to productize the C-Store database, which was sort of an academic experiment at MIT, to understand the benefits and capabilities of a real column store.

[Learn more about column store architectures and how they benefit data speed and management for Infinity Insurance.]

I was immediately interested and contacted them. I was working at another storage company at the time. I had a 20 terabyte (TB) data warehouse, which at the time was one of the largest Oracle on Linux data warehouses in the world.

They didn't want to touch that opportunity just yet, because they were just starting out in alpha mode. I hooked up with them again a few years later, when I was CTO at a company called Glassbeam, where we developed what's substantially an extract, transform, and load (ETL) platform.

By then, they were well along the road. They had a great product and it was solid. So we tried it out, and I have to tell you, I fell in love with Vertica because of the performance benefits that it provided.

When you start thinking about collecting as many different data points as we like to collect, you have to recognize that you're going to end up with a couple of choices on a row store. Either you're going to have very narrow tables, and a lot of them, or else you're going to be wasting a lot of I/O overhead, retrieving entire rows where you just need a couple of fields.
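A rough sketch of that I/O tradeoff, with made-up record sizes: if a wide table is queried for only a couple of fields, a row store reads every byte of every row, while a column store reads only the referenced columns.

```python
def bytes_scanned(rows, column_widths, needed_columns, layout):
    """Approximate bytes read from disk for a full scan, ignoring
    compression, headers, and caching."""
    if layout == "row":
        # Rows are stored contiguously, so every column comes along for the ride.
        return rows * sum(column_widths.values())
    # Column store: only the referenced columns are read.
    return rows * sum(column_widths[c] for c in needed_columns)

# Hypothetical sensor table: 50 numeric columns of 8 bytes each.
widths = {f"metric_{i}": 8 for i in range(50)}
query_cols = ["metric_3", "metric_17"]   # the query needs just two fields
rows = 100_000_000

print(bytes_scanned(rows, widths, query_cols, "row") / 1e9, "GB (row store)")      # ~40 GB
print(bytes_scanned(rows, widths, query_cols, "column") / 1e9, "GB (column store)")  # ~1.6 GB
```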

Greater efficiency

That was what piqued my interest at first. But as I began to use it more and more at Glassbeam, I realized that the performance benefits you could gain by using HP Vertica properly were another order of magnitude beyond what you would expect just with the column-store efficiency.

That's because of certain features that Vertica allows, such as pre-join projections. We can drill into that sort of stuff more if you like, but, at a high level, it lets you maintain the normalized logical integrity of your schema while keeping, under the hood, an optimized, denormalized physical layout on disk for query performance.

Now you might ask how you can be efficient if you have a denormalized structure on disk. It's because Vertica allows you to do some very efficient types of encoding on your data. So all of the low-cardinality columns that would have been wasting space in a row store end up taking almost no space at all.
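Run-length encoding is a simple example of the kind of encoding he means. The snippet below shows the idea in isolation (it is not Vertica's implementation): a sorted, low-cardinality column collapses into a handful of (value, run length) pairs.

```python
from itertools import groupby

def run_length_encode(values):
    """Collapse consecutive repeats into (value, run_length) pairs."""
    return [(v, sum(1 for _ in run)) for v, run in groupby(values)]

# A low-cardinality column (e.g., an array model name), already in sort order.
column = ["CS210"] * 40_000 + ["CS420"] * 35_000 + ["CS700"] * 25_000
encoded = run_length_encode(column)
print(encoded)                                   # [('CS210', 40000), ('CS420', 35000), ('CS700', 25000)]
print(len(column), "values stored as", len(encoded), "runs")
```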

What you find, at least it's been my impression, is that Vertica is the data warehouse that you would have wanted to have built 10 or 20 years ago, but nobody had done it yet.

Nowadays, when I'm evaluating other big data platforms, I always have to look at it from the perspective of it's great, we can get some parallelism here, and there are certain operations that we can do that might be difficult on other platforms, but I always have to compare it to Vertica. Frankly, I always find that Vertica comes out on top in terms of features, performance, and usability.

Gardner: When you arrived there at Nimble Storage, what were they using, and where are you now on your journey into a transition to Vertica?

Lancaster: I built the environment here from the ground up. When I got here, there were roughly 30 people. It's a very small company. We started with Postgres. We started with something free. We didn’t want to have a large budget dedicated to the backing infrastructure just yet. We weren’t ready to monetize it yet.

So, we started on Postgres and we've scaled up now to the point where we have about 100 TBs on Postgres. We get decent performance out of the database for the things that we absolutely need to do, which are micro-batch updates and transactional activity. We get that performance because the database lives on Nimble Storage.

I don't know what the largest unsharded Postgres instance is in the world, but I feel like I have one of them. It's a challenge to manage and leverage. Now, we've gotten to the point where we're really enjoying doing larger queries. We really want to understand the entire installed base of how we want to do analyses that extend across the entire base.

Rich information

We want to understand the lifecycle of a volume. We want to understand how it grows, how it lives, what its performance characteristics are, and then how it gradually falls into senescence when people stop using it. It turns out there is a lot of really rich information that we now have access to for understanding storage lifecycles in a way I don't think was possible before.

But to do that, we need to take our infrastructure to the next level. So we've been doing that: we've loaded a large amount of our sensor data -- that's the numerical data I talked about -- into Vertica, started to compare the queries, and then started to use Vertica more and more for all the analysis we're doing.

Internally, we're using Vertica, just because of the performance benefits. I can give you an example. We had a particular query, a particularly large query. It was to look at certain aspects of latency over a month across the entire installed base to understand a little bit about the distribution, depending on different factors, and so on.

We ran that query in Postgres and, depending on how busy the server was, it took anywhere from 12 to 24 hours to run. On Vertica, running the same query on the same data takes anywhere from three to seven seconds.
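Taking those reported ranges at face value, the improvement is roughly four orders of magnitude:

```python
postgres_s = (12 * 3600, 24 * 3600)   # 12 to 24 hours, as reported
vertica_s = (3, 7)                    # 3 to 7 seconds, as reported

low = postgres_s[0] / vertica_s[1]    # most conservative comparison
high = postgres_s[1] / vertica_s[0]   # most generous comparison
print(f"speedup roughly {low:,.0f}x to {high:,.0f}x")   # ~6,171x to 28,800x
```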

I anticipated that because we were aware upfront of the benefits we'd be getting. I've seen it before. We knew how to structure our projections to get that kind of performance. We knew what kind of infrastructure we'd need under it. I'm really excited. We're getting exactly what we wanted and better.

This is only a three-node cluster. Look at the performance we're getting. On the smaller queries, we're getting sub-second latencies. On the big ones, we're getting sub-10-second latencies. It's absolutely amazing. It's game changing.

People can sit at their desktops now, manipulate data, come up with new ideas and iterate without having to run a batch and go home. It's a dramatic productivity increase. Data scientists tend to be fairly impatient. They're highly paid people, and you don’t want them sitting at their desk waiting to get an answer out of the database. It's not the best use of their time.

Gardner: Larry, is there another aspect to the HP Vertica value when it comes to the cloud model for deployment? It seems to me that if Nimble Storage continues to grow rapidly and scales that, bringing all that data back to a central single point might be problematic. Having it distributed or in different cloud deployment models might make sense. Is there something about the way Vertica works within a cloud services deployment that is of interest to you as well?

No worries

Lancaster: There's the ease of adding nodes without downtime, and the fact that you can create a K-safe cluster. If my cluster is 16 nodes wide now and I want two-node redundancy, it's very similar to RAID. You can specify that, and the database will take care of it for you. You don't have to worry about the database going down and losing data as a result of a node failure now and then.
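In spirit, K-safety means each piece of data has K extra copies on other nodes, so any K nodes can fail without data loss. A toy placement scheme (not Vertica's actual mechanism) might look like this:

```python
def place_replicas(num_segments, nodes, k=2):
    """Assign each data segment to k+1 distinct nodes, round-robin.
    With k=2, any two node failures leave at least one copy reachable."""
    placement = {}
    for seg in range(num_segments):
        placement[seg] = [nodes[(seg + r) % len(nodes)] for r in range(k + 1)]
    return placement

nodes = [f"node{i}" for i in range(1, 17)]       # a 16-node cluster
layout = place_replicas(num_segments=8, nodes=nodes, k=2)
print(layout[0])   # ['node1', 'node2', 'node3'] -- segment 0 survives losing any two of these
```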

I love the fact that you don't have to pay extra for that. If I want to put more cores or nodes on it, or I want to put more redundancy into my design, I can do that without paying more for it. Wow! That's kind of revolutionary in itself.

It's great to see a database company incented to give you great performance. They're incented to help you work better with more nodes and more cores. They don't have to worry about people not being able to pay the additional license fees to deploy more resources. In that sense, it's great.

We have our own private cloud -- that’s how I like to think of it -- at an offsite colocation facility. We do DR through Nimble Storage. At the same time, we have a K-safe cluster. We had a hardware glitch on one of the nodes last week, and the other two nodes stayed up, served data, and everything was fine.

Those kinds of features are critical, and that ability to be flexible and expand is critical for someone who is trying to build a large cloud infrastructure, because you're never going to know in advance exactly how much you're going to need.

If you do your job right as a cloud provider, people just want more and more and more. You want to get them hooked and you want to get them enjoying the experience. Vertica lets you do that.
