Can We Finally Find the Database Holy Grail? | Part 3

With the advent of Durable Distributed Cache architectures organizations can build global systems with transactional semantics

In my first post in this three-part series I talked about the need for distributed transactional databases that scale out horizontally across commodity machines, as compared to traditional transactional databases that employ a "scale-up" design. Simply adding more machines is a quicker, cheaper and more flexible way of increasing database capacity than forklift upgrades to giant steam-belching servers. It also brings the promise of continuous availability and of geo-distributed operation.

The second post in this series provided an overview of the three historical approaches to designing distributed transactional database systems, namely: (1) Shared Disk Designs (e.g., Oracle RAC); (2) Shared Nothing Designs (e.g., the Facebook MySQL implementation); and (3) Synchronous Commit Designs (e.g., Google F1). All of them have some advantages over traditional client-server database systems, but they each have serious limitations in relation to cost, complexity, dependencies on specialized infrastructure, and workload-specific performance trade-offs. I noted that we are very excited about a recent innovation in distributed database design, introduced by NuoDB's technical founder Jim Starkey. We call the concept Durable Distributed Cache (DDC), and I want to spend a little time in this third and final post talking about what it is, with a high-level overview of how it works.

Memory-Centric vs. Storage-Centric
The first insight Jim had was that all general-purpose relational databases to date have been architected around a storage-centric assumption, and that this is a fundamental problem when it comes to scaling out. In effect, database systems have been fancy file systems that arrange for concurrent read/write access to disk-based files such that users do not trample on each other. The Durable Distributed Cache architecture inverts that idea, imagining the database as a set of in-memory container objects that can overflow to disk if necessary, and can be retained in backing stores for durability purposes. Memory-centric vs. storage-centric may sound like splitting hairs, but it turns out to be really significant. The reasons are best described by example.

Suppose you have a single, logical DDC database running on 50 servers (which is absolutely feasible with an ACID transactional DDC-based database). And suppose that at some moment server 23 needs object #17. In this case, server 23 might determine that object #17 is instantiated in memory on seven other servers. It simply requests the object from the most responsive of them. Note that because the object was in memory, the operation involved no disk I/O: it was a remote memory fetch, which is orders of magnitude faster than going to disk.

You might ask about the case in which object #17 does not exist in memory elsewhere.  In the Durable Distributed Cache architecture this is handled by certain servers "faking" that they have all the objects in memory.  In practice, of course, they are maintaining backing stores on disk, SSD or whatever they choose (in the NuoDB implementation they can use arbitrary Key/Value stores such as Amazon S3 or Hadoop HDFS).  As it relates to supplying objects, these "backing store servers" behave exactly like the other servers except they can't guarantee the same response times.
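
To make the fetch path concrete, here is a minimal sketch in Python of how a server might decide where to get an object. All of the names (Peer, fetch, the latency table) are illustrative assumptions for this post, not NuoDB's actual API; the point is simply that in-memory peers are preferred, with backing-store servers as the fallback that can always supply the object, just more slowly.

```python
class Peer:
    """A DDC server: TEs hold a subset of objects in memory; SMs also
    keep a durable backing store (any key/value store would do)."""
    def __init__(self, name, backing_store=None):
        self.name = name
        self.memory = {}                     # object_id -> object state
        self.backing_store = backing_store   # dict-like; None for pure TEs

    def has_in_memory(self, object_id):
        return object_id in self.memory

    def supply(self, object_id):
        if object_id in self.memory:
            return self.memory[object_id]        # remote memory fetch: no disk I/O
        if self.backing_store is not None:
            obj = self.backing_store[object_id]  # slower: read from the durable store
            self.memory[object_id] = obj         # now cached for future requests
            return obj
        raise KeyError(object_id)

def fetch(object_id, peers, latency_ms):
    """Request an object from the most responsive server that can supply it."""
    # Prefer servers that already have the object instantiated in memory...
    candidates = [p for p in peers if p.has_in_memory(object_id)]
    # ...otherwise any backing-store server can supply it, just less quickly.
    if not candidates:
        candidates = [p for p in peers if p.backing_store is not None]
    best = min(candidates, key=lambda p: latency_ms[p.name])
    return best.supply(object_id)
```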

So all servers in the DDC architecture can request objects and supply objects.  They are peers in that sense (and in all other senses).  Some servers have a subset of the objects at any given time, and can therefore only supply a subset of the database to other servers.  Other servers have all the objects and can supply any of them, but will be slower to supply objects that are not resident in memory.

Let's call the servers with a subset of the objects Transaction Engines (TEs), and the other servers Storage Managers (SMs). TEs are purely in-memory servers that do not need to use disks. They are autonomous and can unilaterally load and eject objects from memory according to their needs. Unlike TEs, SMs can't just drop objects on the floor when they are finished with them; instead they must ensure the objects are safely placed in durable storage.
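
The asymmetry between the two roles fits in a few lines. The sketch below is a hypothetical simplification (real eviction logic is certainly more involved): a TE ejects an object by simply dropping it, while an SM must flush it to its durable store first.

```python
class TransactionEngine:
    """Purely in-memory peer: loads and ejects objects unilaterally."""
    def __init__(self):
        self.memory = {}   # object_id -> object state

    def eject(self, object_id):
        # Safe to drop on the floor: any committed state for this object
        # is already durable on at least one Storage Manager.
        self.memory.pop(object_id, None)

class StorageManager(TransactionEngine):
    """A specialized TE that additionally guarantees durability."""
    def __init__(self, backing_store):
        super().__init__()
        self.backing_store = backing_store   # dict-like durable K/V store

    def eject(self, object_id):
        # Must place the object safely in durable storage before
        # releasing the in-memory copy.
        if object_id in self.memory:
            self.backing_store[object_id] = self.memory[object_id]
            del self.memory[object_id]
```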

Readers familiar with caching architectures will have recognized that the TEs are in effect a distributed DRAM cache, and the SMs are specialized TEs that ensure durability. Hence the name Durable Distributed Cache.

Resilience to Failure
It turns out that any object state present on a TE is either already committed to disk (i.e., safe on one or more SMs) or part of an uncommitted transaction that will simply fail at the application level if the object goes away. This means the database has the interesting property of being resilient to the loss of TEs. You can shut a TE down or just unplug it, and the system does not lose data. It will lose throughput capacity of course, and any partial transactions on the TE will be reported to the application as failed transactions. But transactional applications are designed to handle transaction failure. If you reissue the transaction at the application level, it will be assigned to a different TE and will proceed to completion. So the DDC architecture is resilient to the loss of TEs.
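
At the application level, that retry pattern might look like the following sketch. The db.begin() context manager and the TransientFailure exception are assumed names rather than a real driver API; the point is only that the application reissues the transaction and the connection layer routes the retry to a surviving TE.

```python
class TransientFailure(Exception):
    """Raised when the TE executing a transaction is lost mid-flight."""

def run_transaction(db, txn_fn, attempts=3):
    """Run txn_fn inside a transaction, reissuing it on TE failure."""
    for _ in range(attempts):
        try:
            with db.begin() as txn:   # hypothetical connection API
                return txn_fn(txn)
        except TransientFailure:
            continue                  # a different TE will get the retry
    raise RuntimeError(f"transaction failed after {attempts} attempts")
```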

What about SMs? Recall that you can have as many SMs as you like. They are effectively just TEs that secretly stash away the objects in some durable store. And, unless you configure it not to, each SM might as well store all the objects. Disks are cheap, so you can keep as many redundant copies of the whole database as you want. In consequence, the DDC architecture is resilient not only to the loss of TEs, but also to the loss of SMs.

In fact, as long as you have at least one TE and one SM running, you still have a running database.  Resilience to failure is one of the longstanding but unfulfilled promises of distributed transactional databases.  The DDC architecture addresses this directly.

Elastic Scale-out and Scale-in
What happens if I add a server to a DDC database?  Think of the TE layer as a cache.  If the new TE is given work to do, it will start asking for objects and doing the assigned work.  It will also start serving objects to other TEs that need them.  In fact, the new TE is a true peer of the other TEs.  Furthermore, if you were to shut down all of the other TEs, the database would still be running, and the new TE would be the only server doing transactional work.

SMs, being specialized TEs, can also be added and shut down dynamically. If you add an "empty" (or stale) SM to a running database, it will cycle through the list of objects and load them into its durable store, fetching each from the most responsive source, as usual. Once it has all the objects, it will raise its hand and take part as a peer to the other SMs. And, just as with the new TE described above, you can delete all the other SMs once you have added the new one. The system will keep running without missing a beat or losing any data.
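
A sketch of that bootstrap loop, reusing the hypothetical fetch helper from the earlier example (the ready flag is likewise illustrative, standing in for however a real system announces that the new peer is current):

```python
def bootstrap_sm(new_sm, peers, latency_ms, all_object_ids):
    """Bring an empty (or stale) SM up to date, then admit it as a peer."""
    for object_id in all_object_ids:
        if object_id not in new_sm.backing_store:       # skip objects it already has
            obj = fetch(object_id, peers, latency_ms)   # most responsive source, as usual
            new_sm.backing_store[object_id] = obj
    new_sm.ready = True   # "raise its hand": now a full peer of the other SMs
```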

So the bottom line is that the DDC architecture delivers capacity on demand. You can elastically scale out the number of TEs and SMs, and scale them back in again, according to workload requirements. Capacity on demand is a second promise of distributed databases that is delivered by the DDC architecture.

Geo-Distribution
The astute reader will no doubt be wondering about the hardest part of this distributed database problem: we are talking about distributed transactional databases. Transactions, specifically ACID transactions, are an enormously simplifying abstraction that allows application programmers to build their applications with very clean, high-level and well-defined data guarantees. If I store my data in an ACID transactional database, I know it will isolate my program from other programs, maintain data consistency, avoid partial failure of state changes and guarantee that stored data will still be there at a later date, irrespective of external factors. Application programs are vastly simpler when they can trust an ACID-compliant database to look after their data, whatever the weather.

The DDC architecture adopts a model of append-only updates.  Traditionally, an update to a record in a database overwrites that record, and a deletion of a record removes the record.  That may sound obvious, but there is another way, invented by Jim Starkey about 25 years ago.  The idea is to create and maintain versions of everything.  In this model, you never do a destructive update or destructive delete.  You only ever create new versions of records, and in the case of a delete, the new version is a record version that notes the record is no longer extant.  This model is called MVCC (multi-version concurrency control), and it has a number of well-known benefits, even in scale-up databases.  MVCC has even greater benefits in distributed database architectures, including DDC.
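
A toy version chain makes the append-only model concrete. This sketch is illustrative rather than NuoDB's actual record format: an update appends a new version, and a delete appends a tombstone version noting the record is no longer extant, so nothing is ever destroyed.

```python
from dataclasses import dataclass, field
from itertools import count

_txn_ids = count(1)   # monotonically increasing transaction IDs

@dataclass
class Version:
    txn_id: int       # transaction that created this version
    value: object     # None marks a tombstone (the record was deleted)

@dataclass
class Record:
    versions: list = field(default_factory=list)   # append-only, oldest first

    def update(self, value):
        # Never overwrite in place: append a new version instead.
        self.versions.append(Version(next(_txn_ids), value))

    def delete(self):
        # A delete is just another version, noting the record is gone.
        self.versions.append(Version(next(_txn_ids), None))
```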

We don't have the space here to cover MVCC in detail, but it is worth noting that one of the things it does is allow a DBMS to manage read/write concurrency without the use of traditional locks: readers don't block writers, and writers don't block readers. It also has some exotic features, including that you could, in theory, maintain a full history of the entire database. But as it relates to DDC and the distributed transactional database challenge, there is something very neat about MVCC: DDC leverages a distributed form of MVCC, in concert with its distributed object semantics, to allow almost all inter-server communication to be asynchronous.
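
Building on the Record sketch above, snapshot visibility is just a walk back through the version chain. This is again a simplification, but it shows why a reader holding an older snapshot never waits on a writer: later versions are simply invisible to it, so no locks are needed.

```python
def read_at(record, snapshot_txn_id):
    """Return the value visible to a transaction whose snapshot was
    taken at snapshot_txn_id; newer versions are invisible to it."""
    for version in reversed(record.versions):
        if version.txn_id <= snapshot_txn_id:
            return version.value   # None here means "deleted at this snapshot"
    return None                    # the record did not exist yet

r = Record()
r.update("v1")                        # committed by transaction 1
snapshot = 1                          # a reader takes its snapshot here
r.update("v2")                        # a later writer appends transaction 2's version
assert read_at(r, snapshot) == "v1"   # the reader still sees v1, unblocked
```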

The implications of DDC being asynchronous are very far-reaching.  In general, it allows much higher utilization of system resources (cores, networks, disks, etc.) than synchronous models can.  But specifically, it allows the system to be fairly insensitive to network latencies, and to the location of the servers relative to each other.  Or to put it a different way, it means you can start up your next TE or SM in a remote datacenter and connect it to the running database.  Or you can start up half of the database servers in your datacenter and the other half on a public cloud.

Modern applications are distributed.  Users of a particular web site are usually spread across the globe.  Mobile applications are geo-distributed by nature.  Internet of Things (IoT) applications are connecting gazillions of consumer devices that could be anywhere at any time.  None of these applications are well served by a single big database server in a single location, or even a cluster of smaller database servers in a single location.  What they need is a single database running on a group of database servers in multiple datacenters (or cloud regions).  That can give them higher performance, datacenter failover and the potential to manage issues of data privacy and sovereignty.

The third historical promise of Distributed Transactional Database systems is Geo-Distribution.  Along with the other major promises (Resilience to Failure and Elastic Scalability), Geo-Distribution has heretofore been an unattainable dream.  The DDC architecture, with its memory-centric distributed object model and its asynchronous inter-server protocols, finally delivers on this capability.

In Summary
This short series of posts has sought to provide a quick overview of distributed database designs, with some high-level commentary on the advantages and disadvantages of the various approaches. There has been great historical success with Shared Disk, Shared Nothing and Synchronous Commit models, and we have seen advanced technology companies deliver some of the most scalable systems in the world using these distributed database technologies. But to date, distributed databases have never really delivered anything close to their full promise. They have also been inaccessible to people and organizations that lack the development and financial resources of Google or Facebook.

With the advent of DDC architectures, it is now possible for any organization to build global systems with transactional semantics, on-demand capacity and the ability to run for 10 years without missing a beat. The big promises of distributed transactional databases are Resilience to Failure, Elastic Scalability and Geo-Distribution. We're very excited that, due to Jim Starkey's Durable Distributed Cache, those capabilities are finally being delivered to the industry.

More Stories By Barry Morris

Barry Morris is CEO & Co-Founder of NuoDB, Inc. An accomplished software CEO with over 25 years of industry experience in the USA and Europe, running private and public companies ranging in scale from early startup phase to 1,000+ employees, he loves to build companies around industry-changing paradigm shifts in technology. Morris was previously CEO of StreamBase and Iona Technologies.
