By Barry Morris
December 13, 2013
In my first post in this three-part series I talked about the need for distributed transactional databases that scale out horizontally across commodity machines, as compared to traditional transactional databases that employ a "scale-up" design. Simply adding more machines is a quicker, cheaper and more flexible way of increasing database capacity than forklift upgrades to giant steam-belching servers. It also brings the promise of continuous availability and of geo-distributed operation.
The second post in this series provided an overview of the three historical approaches to designing distributed transactional database systems, namely: (1) Shared Disk designs (e.g., Oracle RAC); (2) Shared Nothing designs (e.g., the Facebook MySQL implementation); and (3) Synchronous Commit designs (e.g., Google F1). All of them have some advantages over traditional client-server database systems, but they each have serious limitations in relation to cost, complexity, dependencies on specialized infrastructure, and workload-specific performance trade-offs. I noted that we are very excited about a recent innovation in distributed database design, introduced by NuoDB's technical founder Jim Starkey. We call the concept Durable Distributed Cache (DDC), and I want to spend a little time in this third and final post talking about what it is, with a high-level overview of how it works.
Memory-Centric vs. Storage-Centric
The first insight Jim had was that all general-purpose relational databases to date have been architected around a storage-centric assumption, and that this is a fundamental problem when it comes to scaling out. In effect, database systems have been fancy file systems that arrange for concurrent read/write access to disk-based files such that users do not trample on each other. The Durable Distributed Cache architecture inverts that idea, imagining the database as a set of in-memory container objects that can overflow to disk if necessary, and can be retained in backing stores for durability purposes. Memory-centric vs. storage-centric may sound like splitting hairs, but it turns out to be really significant. The reasons are best described by example.
Suppose you have a single, logical DDC database running on 50 servers (which is absolutely feasible to do with an ACID transactional DDC-based database). And suppose that at some moment server 23 needs object #17. In this case, server 23 might determine that object #17 is instantiated in memory on seven other servers. It simply requests the object from the most responsive server. Note that because the object was in memory, the operation involved no disk I/O; it was a remote memory fetch, which is orders of magnitude faster than going to disk.
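As a toy illustration of that lookup (not NuoDB's actual protocol; the peer names, latency figures and object values below are invented for the sketch), "most responsive" can be modeled as picking the lowest-latency peer that currently holds the object in memory:

```python
class Peer:
    """A toy server holding some objects in memory (illustrative only)."""
    def __init__(self, name, objects, latency_ms):
        self.name = name
        self.objects = objects          # object id -> in-memory state
        self.latency_ms = latency_ms    # observed responsiveness of this peer

def fetch(object_id, peers):
    """Request an object from the most responsive peer that has it in memory."""
    holders = [p for p in peers if object_id in p.objects]
    if not holders:
        raise KeyError(object_id)       # nobody has it in memory
    best = min(holders, key=lambda p: p.latency_ms)
    return best.name, best.objects[object_id]

peers = [
    Peer("server-12", {17: "row-data"}, latency_ms=4.0),
    Peer("server-31", {17: "row-data"}, latency_ms=1.2),
    Peer("server-40", {99: "other"},    latency_ms=0.5),
]
print(fetch(17, peers))   # server-31 wins: the fastest peer that holds object 17
```

In the real system responsiveness would be measured continuously rather than stored as a constant, but the selection logic is the same in spirit.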
You might ask about the case in which object #17 does not exist in memory elsewhere. In the Durable Distributed Cache architecture this is handled by certain servers "faking" that they have all the objects in memory. In practice, of course, they are maintaining backing stores on disk, SSD or whatever they choose (in the NuoDB implementation they can use arbitrary Key/Value stores such as Amazon S3 or Hadoop HDFS). As it relates to supplying objects, these "backing store servers" behave exactly like the other servers except they can't guarantee the same response times.
So all servers in the DDC architecture can request objects and supply objects. They are peers in that sense (and in all other senses). Some servers have a subset of the objects at any given time, and can therefore only supply a subset of the database to other servers. Other servers have all the objects and can supply any of them, but will be slower to supply objects that are not resident in memory.
Let's call the servers with a subset of the objects Transaction Engines (TEs), and the other servers Storage Managers (SMs). TEs are pure in-memory servers that do not need to use disks. They are autonomous and can unilaterally load and eject objects from memory according to their needs. Unlike TEs, SMs can't just drop objects on the floor when they are finished with them; instead they must ensure the objects are safely placed in durable storage.
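That asymmetry can be sketched in a few lines; this is a toy model, not NuoDB's code, and the class names and dict-backed store are assumptions of the sketch. The key point is that an SM is just a TE with one extra obligation on eviction:

```python
class TransactionEngine:
    """Pure in-memory server: may unilaterally eject an object it no longer needs."""
    def __init__(self):
        self.cache = {}                     # object id -> in-memory state

    def evict(self, oid):
        self.cache.pop(oid, None)           # safe to drop on the floor

class StorageManager(TransactionEngine):
    """A specialized TE: must place an object in durable storage before evicting it."""
    def __init__(self, backing_store):
        super().__init__()
        self.store = backing_store          # stand-in for S3, HDFS, local disk, etc.

    def evict(self, oid):
        if oid in self.cache:
            self.store[oid] = self.cache[oid]   # the durable write comes first
        super().evict(oid)

sm = StorageManager(backing_store={})
sm.cache[17] = "object-17 state"
sm.evict(17)
print(sm.store[17])                         # the object survives eviction
```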
For readers familiar with caching architectures, you might have already recognized that these TEs are in effect a distributed DRAM cache, and the SMs are specialized TEs that ensure durability. Hence the name Durable Distributed Cache.
Resilience to Failure
It turns out that any object state that is present on a TE is either already committed to disk (i.e., safe on one or more SMs) or part of an uncommitted transaction that will simply fail at the application level if the object goes away. This means that the database has the interesting property of being resilient to the loss of TEs. You can shut a TE down or just unplug it and the system does not lose data. It will lose throughput capacity of course, and any partial transactions on the TE will be reported to the application as failed transactions. But transactional applications are designed to handle transaction failure. If you reissue the transaction at the application level it will be assigned to a different TE and will proceed to completion. So the DDC architecture is resilient to the loss of TEs.
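The application-level reissue can be sketched as a simple retry loop. This is an assumption-laden illustration (the `dead_te`/`healthy_te` stand-ins and the use of `ConnectionError` are invented for the sketch), but it captures why losing a TE costs only an in-flight transaction, never committed data:

```python
def run_transaction(work, engines, max_attempts=3):
    """Reissue a failed transaction against a different TE (application-level retry)."""
    last_error = None
    for attempt in range(max_attempts):
        te = engines[attempt % len(engines)]  # pick a different TE each try
        try:
            return te(work)
        except ConnectionError as err:        # that TE was shut down or unplugged
            last_error = err                  # nothing was committed, so nothing is lost
    raise last_error

# Stand-ins for real TE connections: one dead, one healthy.
def dead_te(work):
    raise ConnectionError("TE 23 is gone")

def healthy_te(work):
    return f"committed: {work}"

print(run_transaction("UPDATE accounts ...", [dead_te, healthy_te]))
```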
What about SMs? Recall that you can have as many SMs as you like. They are effectively just TEs that secretly stash away the objects in some durable store. And, unless you configure it not to, each SM might as well store all the objects. Disks are cheap, which means that you have as many redundant copies of the whole database as you want. In consequence, the DDC architecture is not only resilient to the loss of TEs, but also to the loss of SMs.
In fact, as long as you have at least one TE and one SM running, you still have a running database. Resilience to failure is one of the longstanding but unfulfilled promises of distributed transactional databases. The DDC architecture addresses this directly.
Elastic Scale-out and Scale-in
What happens if I add a server to a DDC database? Think of the TE layer as a cache. If the new TE is given work to do, it will start asking for objects and doing the assigned work. It will also start serving objects to other TEs that need them. In fact, the new TE is a true peer of the other TEs. Furthermore, if you were to shut down all of the other TEs, the database would still be running, and the new TE would be the only server doing transactional work.
SMs, being specialized TEs, can also be added and shut down dynamically. If you add an "empty" (or stale) SM to a running database, it will cycle through the list of objects and load them into its durable store, fetching them from the most responsive place as is usual. Once it has all the objects, it will raise its hand and take part as a peer to the other SMs. And, just as with the new TE described above, you can delete all other SMs once you have added the new SM. The system will keep running without missing a beat or losing any data.
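A minimal sketch of that warm-up, under the same toy assumptions as before (the function name and the dict-as-store are invented): the new SM cycles through the object list, fetches only what it is missing, and then reports itself ready to serve as a full peer.

```python
def bring_sm_online(new_store, object_ids, fetch):
    """Fill an empty or stale SM from its peers, then report it ready."""
    for oid in object_ids:
        if oid not in new_store:              # a stale SM keeps what it already has
            new_store[oid] = fetch(oid)       # fetched from the most responsive peer
    return sorted(new_store)                  # the SM can now supply every object

# Peers collectively hold the whole database:
peer_copy = {1: "a", 2: "b", 3: "c"}
fresh_sm = {2: "b"}                           # a stale SM missing objects 1 and 3
ready = bring_sm_online(fresh_sm, peer_copy.keys(), peer_copy.__getitem__)
print(ready)   # [1, 2, 3]
```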
So the bottom line is that the DDC architecture delivers capacity on demand. You can elastically scale-out the number of TEs and SMs and scale them back in again according to workload requirements. Capacity on demand is a second promise of distributed databases that is delivered by the DDC architecture.
The astute reader will no doubt be wondering about the hardest part of this distributed database problem: namely, that we are talking about distributed transactional databases. Transactions, specifically ACID transactions, are an enormously simplifying abstraction that allows application programmers to build their applications with very clean, high-level and well-defined data guarantees. If I store my data in an ACID transactional database, I know it will isolate my program from other programs, maintain data consistency, avoid partial failure of state changes and guarantee that stored data will still be there at a later date, irrespective of external factors. Application programs are vastly simpler when they can trust an ACID-compliant database to look after their data, whatever the weather.
The DDC architecture adopts a model of append-only updates. Traditionally, an update to a record in a database overwrites that record, and a deletion of a record removes the record. That may sound obvious, but there is another way, invented by Jim Starkey about 25 years ago. The idea is to create and maintain versions of everything. In this model, you never do a destructive update or destructive delete. You only ever create new versions of records, and in the case of a delete, the new version is a record version that notes the record is no longer extant. This model is called MVCC (multi-version concurrency control), and it has a number of well-known benefits, even in scale-up databases. MVCC has even greater benefits in distributed database architectures, including DDC.
We don't have the space here to cover MVCC in detail, but it is worth noting that one of the things it does is to allow a DBMS to manage read/write concurrency without the use of traditional locks. For example, readers don't block writers and writers do not block readers. It also has some exotic features, including that if you wanted to you could theoretically maintain a full history of the entire database. But as it relates to DDC and the Distributed Transactional Database challenge, there is something very neat about MVCC. DDC leverages a distributed variety of MVCC in concert with DDC's distributed object semantics that allows almost all the inter-server communications to be asynchronous.
The implications of DDC being asynchronous are very far-reaching. In general, it allows much higher utilization of system resources (cores, networks, disks, etc.) than synchronous models can. But specifically, it allows the system to be fairly insensitive to network latencies, and to the location of the servers relative to each other. Or to put it a different way, it means you can start up your next TE or SM in a remote datacenter and connect it to the running database. Or you can start up half of the database servers in your datacenter and the other half on a public cloud.
Modern applications are distributed. Users of a particular web site are usually spread across the globe. Mobile applications are geo-distributed by nature. Internet of Things (IoT) applications are connecting gazillions of consumer devices that could be anywhere at any time. None of these applications are well served by a single big database server in a single location, or even a cluster of smaller database servers in a single location. What they need is a single database running on a group of database servers in multiple datacenters (or cloud regions). That can give them higher performance, datacenter failover and the potential to manage issues of data privacy and sovereignty.
The third historical promise of Distributed Transactional Database systems is Geo-Distribution. Along with the other major promises (Resilience to Failure and Elastic Scalability), Geo-Distribution has heretofore been an unattainable dream. The DDC architecture, with its memory-centric distributed object model and its asynchronous inter-server protocols, finally delivers on this capability.
This short series of posts has sought to provide a quick overview of distributed database designs, with some high-level commentary on the advantages and disadvantages of the various approaches. There has been great historical success with Shared Disk, Shared Nothing and Synchronous Commit models. We see the advanced technology companies delivering some of the most scalable systems in the world using these distributed database technologies. But to date, distributed databases have never really delivered anything close to their full promise. They have also been inaccessible to people and organizations that lack the development and financial resources of Google or Facebook.
With the advent of DDC architectures, it is now possible for any organization to build global systems with transactional semantics, on-demand capacity and the ability to run for 10 years without missing a beat. The big promises of Distributed Transactional Databases are Resilience to Failure, Elastic Scalability and Geo-Distribution. We're very excited that due to Jim Starkey's Durable Distributed Cache, those capabilities are finally being delivered to the industry.