By Barry Morris
December 13, 2013 08:00 AM EST
In my first post in this three-part series I talked about the need for distributed transactional databases that scale out horizontally across commodity machines, in contrast to traditional transactional databases that employ a "scale-up" design. Simply adding more machines is a quicker, cheaper and more flexible way of increasing database capacity than forklift upgrades to giant steam-belching servers. It also brings the promise of continuous availability and of geo-distributed operation.
The second post in this series provided an overview of the three historical approaches to designing distributed transactional database systems, namely: (1) Shared Disk Designs (e.g., Oracle RAC); (2) Shared Nothing Designs (e.g., the Facebook MySQL implementation); and (3) Synchronous Commit Designs (e.g., Google F1). All of them have some advantages over traditional client/server database systems, but each has serious limitations in relation to cost, complexity, dependencies on specialized infrastructure, and workload-specific performance trade-offs. I noted that we are very excited about a recent innovation in distributed database design, introduced by NuoDB's technical founder Jim Starkey. We call the concept Durable Distributed Cache (DDC), and I want to spend a little time in this third and final post explaining what it is, with a high-level overview of how it works.
Memory-Centric vs. Storage-Centric
The first insight Jim had was that every general-purpose relational database to date has been architected around a storage-centric assumption, and that this is a fundamental problem when it comes to scaling out. In effect, database systems have been fancy file systems that arrange for concurrent read/write access to disk-based files such that users do not trample on each other. The Durable Distributed Cache architecture inverts that idea, treating the database as a set of in-memory container objects that can overflow to disk if necessary and can be retained in backing stores for durability purposes. Memory-centric versus storage-centric may sound like splitting hairs, but it turns out to be really significant. The reasons are best described by example.
Suppose you have a single, logical DDC database running on 50 servers (which is absolutely feasible to do with an ACID transactional DDC-based database). And suppose that at some moment server 23 needs object #17. In this case, server 23 might determine that object #17 is instantiated in memory on seven other servers. It simply requests the object from the most responsive server. Note that because the object was in memory, the operation involved no disk I/O; it was a remote memory fetch, which is orders of magnitude faster than going to disk.
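To make that concrete, here is a minimal sketch of the lookup in Python. Every name here is a hypothetical illustration, not NuoDB's actual API, and "responsiveness" is reduced to a single observed-latency number:

```python
import time

class Peer:
    """A remote server that may hold an object in memory (hypothetical)."""
    def __init__(self, name, latency_ms):
        self.name = name
        self.latency_ms = latency_ms  # observed responsiveness of this peer
        self.memory = {}              # object_id -> object state

    def fetch(self, object_id):
        # Simulate a remote memory fetch: note there is no disk I/O here.
        time.sleep(self.latency_ms / 1000.0)
        return self.memory[object_id]

def request_object(object_id, peers):
    """Fetch an object from the most responsive peer holding it in memory."""
    holders = [p for p in peers if object_id in p.memory]
    if not holders:
        raise LookupError("object %r not in memory on any peer" % object_id)
    return min(holders, key=lambda p: p.latency_ms).fetch(object_id)
```

The essential point is that the transfer is memory to memory; the requesting server never waits on anyone's disk.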
You might ask about the case in which object #17 does not exist in memory elsewhere. In the Durable Distributed Cache architecture this is handled by certain servers "faking" that they have all the objects in memory. In practice, of course, they are maintaining backing stores on disk, SSD or whatever they choose (in the NuoDB implementation they can use arbitrary Key/Value stores such as Amazon S3 or Hadoop HDFS). As it relates to supplying objects, these "backing store servers" behave exactly like the other servers except they can't guarantee the same response times.
So all servers in the DDC architecture can request objects and supply objects. They are peers in that sense (and in all other senses). Some servers have a subset of the objects at any given time, and can therefore only supply a subset of the database to other servers. Other servers have all the objects and can supply any of them, but will be slower to supply objects that are not resident in memory.
Let's call the servers with a subset of the objects Transaction Engines (TEs), and the other servers Storage Managers (SMs). TEs are pure in-memory servers that do not need to use disks. They are autonomous and can unilaterally load and eject objects from memory according to their needs. Unlike TEs, SMs can't just drop objects on the floor when they are finished with them; instead they must ensure the objects are safely placed in durable storage.
Readers familiar with caching architectures will have recognized that the TEs are in effect a distributed DRAM cache, and the SMs are specialized TEs that ensure durability. Hence the name Durable Distributed Cache.
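The relationship can be sketched in a few lines of Python. Again, these classes and the dict-like key/value adapter are illustrative assumptions, not the product's real interfaces:

```python
class TransactionEngine:
    """A pure in-memory peer: free to load and drop objects at will."""
    def __init__(self):
        self.cache = {}  # object_id -> object state, resident in DRAM

    def evict(self, object_id):
        # A TE can simply drop an object; durability is not its job.
        self.cache.pop(object_id, None)

class StorageManager(TransactionEngine):
    """A specialized TE that also guarantees durability."""
    def __init__(self, kv_store):
        super().__init__()
        # Any dict-like key/value adapter will do: local disk, SSD, or a
        # store such as Amazon S3 or Hadoop HDFS behind an adapter.
        self.kv = kv_store

    def evict(self, object_id):
        # An SM must make an object durable before letting it leave memory.
        if object_id in self.cache:
            self.kv[object_id] = self.cache[object_id]
        super().evict(object_id)

    def fetch(self, object_id):
        # Serve from memory when possible; otherwise fall back to the
        # backing store (same behavior as any peer, just slower).
        if object_id in self.cache:
            return self.cache[object_id]
        return self.kv[object_id]
```

Modeling the SM as a subclass of the TE mirrors the architecture's own claim: an SM is just a TE with a durability obligation bolted on.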
Resilience to Failure
It turns out that any object state present on a TE is either already committed to disk (i.e., safe on one or more SMs) or part of an uncommitted transaction that will simply fail at the application level if the object goes away. This means the database has the interesting property of being resilient to the loss of TEs. You can shut a TE down or just unplug it and the system does not lose data. It will lose throughput capacity of course, and any partial transactions on the TE will be reported to the application as failed transactions. But transactional applications are designed to handle transaction failure. If you reissue the transaction at the application level it will be assigned to a different TE and will proceed to completion. So the DDC architecture is resilient to the loss of TEs.
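From the application's point of view this is just ordinary retry logic. A minimal sketch, assuming a hypothetical `db` client with `begin`/`commit`/`rollback` and a hypothetical `TransactionFailed` error:

```python
class TransactionFailed(Exception):
    """Raised when a transaction is lost, e.g., because its TE went away."""

def run_with_retry(db, txn_fn, attempts=3):
    """Re-issue a transaction if its TE disappears mid-flight.

    On retry the work is assigned to a surviving TE, so the application
    needs no special failover code beyond this loop.
    """
    for attempt in range(1, attempts + 1):
        txn = db.begin()
        try:
            result = txn_fn(txn)
            txn.commit()
            return result
        except TransactionFailed:
            txn.rollback()
            if attempt == attempts:
                raise  # give up after the last attempt
```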
What about SMs? Recall that you can have as many SMs as you like. They are effectively just TEs that secretly stash the objects away in some durable store. And, unless you configure it otherwise, each SM might as well store all the objects. Disks are cheap, which means you can keep as many redundant copies of the whole database as you want. In consequence, the DDC architecture is resilient not only to the loss of TEs but also to the loss of SMs.
In fact, as long as you have at least one TE and one SM running, you still have a running database. Resilience to failure is one of the longstanding but unfulfilled promises of distributed transactional databases. The DDC architecture addresses this directly.
Elastic Scale-out and Scale-in
What happens if I add a server to a DDC database? Think of the TE layer as a cache. If the new TE is given work to do, it will start asking for objects and doing the assigned work. It will also start serving objects to other TEs that need them. In fact, the new TE is a true peer of the other TEs. Furthermore, if you were to shut down all of the other TEs, the database would still be running, and the new TE would be the only server doing transactional work.
SMs, being specialized TEs, can also be added and shut down dynamically. If you add an "empty" (or stale) SM to a running database, it will cycle through the list of objects and load them into its durable store, fetching each one from the most responsive source as usual. Once it has all the objects, it will raise its hand and take part as a peer to the other SMs. And, just as with the new TE described above, you can delete all the other SMs once the new SM has been added. The system will keep running without missing a beat or losing any data.
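The catch-up step can be sketched by reusing `request_object` and the `StorageManager` from the earlier examples. This is deliberately simplified; a real synchronization protocol would also have to track objects that change while the copy is in progress:

```python
def synchronize(new_sm, catalog, peers):
    """Bring an empty or stale SM up to date, then announce it as a peer.

    `catalog` is the list of every object id in the database, and
    `new_sm` is a StorageManager as sketched above (both hypothetical).
    """
    for object_id in catalog:
        if object_id not in new_sm.kv:  # skip anything already durable
            new_sm.kv[object_id] = request_object(object_id, peers)
    new_sm.ready = True  # "raise its hand": now a full peer of the other SMs
```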
So the bottom line is that the DDC architecture delivers capacity on demand. You can elastically scale out the number of TEs and SMs, and scale them back in again, according to workload requirements. Capacity on demand is a second promise of distributed databases that is delivered by the DDC architecture.
Distributed Transactions
The astute reader will no doubt be wondering about the hardest part of this distributed database problem: the fact that we are talking about distributed transactional databases. Transactions, specifically ACID transactions, are an enormously simplifying abstraction that allows application programmers to build their applications with very clean, high-level and well-defined data guarantees. If I store my data in an ACID transactional database, I know it will isolate my program from other programs, maintain data consistency, avoid partial failure of state changes and guarantee that stored data will still be there at a later date, irrespective of external factors. Application programs are vastly simpler when they can trust an ACID-compliant database to look after their data, whatever the weather.
The DDC architecture adopts a model of append-only updates. Traditionally, an update to a record in a database overwrites that record, and a deletion removes it. That may sound obvious, but there is another way, invented by Jim Starkey about 25 years ago: create and maintain versions of everything. In this model you never perform a destructive update or delete. You only ever create new versions of records, and in the case of a delete, the new version simply notes that the record is no longer extant. This model is called MVCC (multi-version concurrency control), and it has a number of well-known benefits even in scale-up databases. It has even greater benefits in distributed database architectures, including DDC.
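A toy version of the idea in Python (illustrative only, nothing like NuoDB's actual record format): every update appends a new version, and a delete appends a tombstone version rather than removing anything.

```python
class VersionedRecord:
    """A record stored as an append-only chain of immutable versions."""
    def __init__(self):
        self.versions = []  # (txn_id, value, deleted) in append order

    def update(self, txn_id, value):
        # Never overwrite: append a new version of the record instead.
        self.versions.append((txn_id, value, False))

    def delete(self, txn_id):
        # A delete is just another version noting the record is gone.
        self.versions.append((txn_id, None, True))
```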
We don't have the space here to cover MVCC in detail, but it is worth noting that it allows a DBMS to manage read/write concurrency without traditional locks: readers don't block writers, and writers don't block readers. It also has some exotic capabilities; for instance, you could in principle maintain a full history of the entire database. But as it relates to DDC and the distributed transactional database challenge, there is something very neat about MVCC: DDC leverages a distributed form of MVCC, in concert with its distributed object semantics, to make almost all inter-server communication asynchronous.
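Continuing the toy model above, a read never takes a lock: the reader fixes a snapshot (here, simply the set of transaction ids it considers committed) and walks the version chain. This is a sketch of snapshot visibility in general, not NuoDB's specific implementation:

```python
def read(record, snapshot):
    """Return the newest version of `record` visible to `snapshot`.

    `snapshot` is the set of txn_ids committed when the reader began.
    No locks are taken, so concurrent writers are never blocked, and
    writers appending new versions never block this reader.
    """
    for txn_id, value, deleted in reversed(record.versions):
        if txn_id in snapshot:
            return None if deleted else value
    return None  # record did not exist at the snapshot point
```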
The implications of DDC being asynchronous are very far-reaching. In general, it allows much higher utilization of system resources (cores, networks, disks, etc.) than synchronous models can. But specifically, it allows the system to be fairly insensitive to network latencies, and to the location of the servers relative to each other. Or to put it a different way, it means you can start up your next TE or SM in a remote datacenter and connect it to the running database. Or you can start up half of the database servers in your datacenter and the other half on a public cloud.
Geo-Distribution
Modern applications are distributed. Users of a particular web site are usually spread across the globe. Mobile applications are geo-distributed by nature. Internet of Things (IoT) applications are connecting gazillions of consumer devices that could be anywhere at any time. None of these applications are well served by a single big database server in a single location, or even a cluster of smaller database servers in a single location. What they need is a single database running on a group of database servers in multiple datacenters (or cloud regions). That can give them higher performance, datacenter failover and the potential to manage issues of data privacy and sovereignty.
The third historical promise of Distributed Transactional Database systems is Geo-Distribution. Along with the other major promises (Resilience to Failure and Elastic Scalability), Geo-Distribution has heretofore been an unattainable dream. The DDC architecture, with its memory-centric distributed object model and its asynchronous inter-server protocols, finally delivers on this capability.
This short series of posts has sought to provide a quick overview of distributed database designs, with some high-level commentary on the advantages and disadvantages of the various approaches. There has been great historical success with the Shared Disk, Shared Nothing and Synchronous Commit models; advanced technology companies have delivered some of the most scalable systems in the world using these distributed database technologies. But to date, distributed databases have never really delivered anything close to their full promise. They have also been inaccessible to people and organizations that lack the development and financial resources of Google or Facebook.
With the advent of the DDC architecture, it is now possible for any organization to build global systems with transactional semantics, on-demand capacity and the ability to run for ten years without missing a beat. The big promises of distributed transactional databases are Resilience to Failure, Elastic Scalability and Geo-Distribution. We're very excited that, thanks to Jim Starkey's Durable Distributed Cache, those capabilities are finally being delivered to the industry.