
Using Cloud for Disaster Recovery

Business Case, Best Practices, and Lessons Learned

Using the cloud for DR solutions is becoming more common; even organizations that are not using the cloud for mission-critical production applications are moving toward using it for application DR.

Business Case for Using Cloud for DR

  1. Faster Recovery Time Objective (RTO): DR typically requires lengthy manual processes to fully restore business applications at the DR site.  Keeping backup data and servers at the DR site is easy; restoring the entire application or service takes time.  For example, a full application restore requires starting services in a specified order, performing DNS and other configuration updates, and so on.  In the cloud, IaaS APIs make it possible to use automation solutions like Kaavo IMOD to restore business applications fully, without manual intervention (see the first sketch after this list).  As a result, organizations get predictable recovery and a reduced RTO: automating application or service recovery can cut RTO from hours or days to minutes.

  2. Shorter Recovery Point Objective (RPO): Instead of relying on offsite tape backups, organizations can reduce their RPO to minutes by maintaining near real-time data backups in the cloud.  For faster transfer of large data sets, dedicated lines can be established between the customer datacenter and the cloud; the cost of a dedicated line depends on the distance between the customer datacenter and the cloud provider's peering point.  For most use cases, VPN connections over the internet are sufficient for transferring data between the customer datacenter and the cloud (see the second sketch after this list).

  3. Lower Costs: Organizations typically pay a high price for standby infrastructure at the DR site, especially servers.  With the cloud, there is no need to pay for DR-site servers when they are not in use; the pay-as-you-use infrastructure model significantly reduces DR costs without compromising service levels.
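
As a concrete illustration of the first point, here is a minimal sketch of an ordered, fully automated restore driven through an IaaS API.  It uses boto3, the AWS SDK for Python; the instance IDs, region, and tier layout are hypothetical, and the article's actual implementations use Kaavo IMOD for this orchestration.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Tiers must come up in order: database, then application, then web.
    # All instance IDs below are placeholders.
    RESTORE_ORDER = [
        ["i-0db0000000000000a"],                       # database tier
        ["i-0app00000000000b", "i-0app00000000000c"],  # application tier
        ["i-0web00000000000d"],                        # web tier
    ]

    for tier in RESTORE_ORDER:
        ec2.start_instances(InstanceIds=tier)
        # Block until the whole tier passes status checks before moving on.
        ec2.get_waiter("instance_status_ok").wait(InstanceIds=tier)
        print(f"Tier restored: {tier}")

Because every step is scripted against the IaaS API, the recovery sequence is repeatable and predictable, which is what drives the RTO down.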
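
For the second point, a hedged sketch of shipping incremental backups to cloud storage on a short cycle to keep the RPO low; the bucket name and file layout are hypothetical.

    import boto3
    from pathlib import Path

    s3 = boto3.client("s3")
    BUCKET = "example-dr-backups"  # hypothetical bucket name

    def ship_incrementals(backup_dir: str) -> None:
        """Upload any incremental backup files not yet present in S3."""
        existing = {
            obj["Key"]
            for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
            for obj in page.get("Contents", [])
        }
        for path in sorted(Path(backup_dir).glob("*.inc")):
            if path.name not in existing:
                s3.upload_file(str(path), BUCKET, path.name)

    # Run from cron (or similar) every few minutes over the VPN:
    ship_incrementals("/var/backups/app-db")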

Following are some of the best practices and lessons learned from the Cloud DR solutions we have implemented so far:

Cloud DR Is Different from Traditional DR
Unlike traditional DR solutions, which rely on a backup infrastructure for the entire datacenter and therefore require a large and costly implementation, cloud DR can be implemented incrementally, application by application.  For example, it is common for organizations to have a large shared database with multiple schemas supporting various applications.  In most cases this sharing is driven by server consolidation to increase the utilization of internal infrastructure, but not all applications using a shared database have the same service-level requirements.  Some applications are more critical than others, so as long as the schemas and application data are separate, it is better to remove the dependency on the shared database by giving each application a right-sized database in the cloud.  This allows optimal prioritization and incremental delivery of the DR project based on the service levels of the individual applications.

Migration of Applications Using Single Sign-on with LDAP
When planning DR for individual applications, it is important to identify the dependent services and to make sure they will be available as part of the DR solution.  Enterprise customers typically use single sign-on (SSO) with LDAP to manage authentication, so the best practice is to treat the SSO service as a critical application in its own right and to bring it up first during the DR process.  An automation solution like Kaavo IMOD enables customers to restore applications and services in the specified order automatically, without manual intervention, along the lines of the sketch below.  During a real DR scenario many things are happening at once, and it is easy to make mistakes under pressure if the application restoration process is not fully automated.  To prevent surprises during an actual DR event, it is important to have a fully automated solution for restoring applications and services.
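
A minimal sketch of that ordering idea, assuming a hypothetical service inventory and a start_service() hook standing in for the real restore logic (the article's implementations use Kaavo IMOD for this sequencing): declare each service's dependencies and restore in topological order, so SSO/LDAP comes up before anything that authenticates against it.

    from graphlib import TopologicalSorter  # Python 3.9+

    # Map each service to the set of services it depends on (hypothetical).
    DEPENDENCIES = {
        "sso-ldap": set(),                  # no dependencies: restore first
        "database": set(),
        "app":      {"sso-ldap", "database"},
        "web":      {"app"},
    }

    def start_service(name: str) -> None:
        print(f"starting {name}")  # placeholder for the real restore step

    # static_order() yields dependencies before their dependents.
    for service in TopologicalSorter(DEPENDENCIES).static_order():
        start_service(service)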

Restoring Back to Normal Operations after DR
This area is often overlooked or under-planned in DR projects.  For companies that run production applications in their own datacenters and use the cloud for DR, processes and automation must be in place to fully restore the applications in the production datacenter, using the latest data from the cloud DR site, once the primary datacenter is back online.  This step is not required for applications that use the cloud as their primary site.  For example, if an application normally runs in one cloud zone and after DR it runs in a different cloud zone, there is no need to move it back to the first zone as long as the service levels of both zones are the same.  If you are deploying new applications, it is best to design for failure: a distributed application running across multiple regions and cloud providers eliminates the need for traditional DR planning, because handling the failure of individual components is built into the application's design and deployment model.

Handling Compliance in the Cloud (e.g., HIPAA, PCI, SOX, SAS-70)
Using available security technologies and processes, several companies have implemented cloud applications that comply with various standards, e.g., HIPAA, PCI, SOX, and SAS-70.  Each compliance standard has its own nuances, but with proper planning you can address all compliance-related issues.  This is a big topic in its own right, so please contact us if you have specific questions.  Cloud providers have published various case studies and best practices, e.g., Amazon's white paper on HIPAA compliance.

Handling Public and Private DNS
A common pattern for enterprise applications is a public DNS for public access and a private DNS, on the internal network, for reaching backend services, databases, and so on.  In these situations it is best to use a virtual private cloud like AWS VPC, or to overlay on any public cloud a private network with the same IP address range as the internal datacenter using open-source solutions (refer to the blog post Building a Private Cloud within a Public Cloud for details on how to implement a secure private network on any public cloud).  To update the public DNS entries for the restored application in the cloud, we use DNS automation services like AWS Route 53 or EasyDNS.  Leveraging these services, Kaavo IMOD automatically updates the public DNS for the applications as part of the restoration during DR, along the lines of the sketch below.
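
For illustration, a minimal sketch of the public-DNS update step using Amazon Route 53 via boto3; the hosted zone ID, record name, and restored IP address are hypothetical, and in the article this step is performed automatically by Kaavo IMOD.

    import boto3

    route53 = boto3.client("route53")

    def point_dns_at_dr(zone_id: str, record_name: str, new_ip: str) -> None:
        """Repoint a public A record at the instance restored in the cloud."""
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Comment": "DR failover: repoint application at DR site",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": record_name,
                        "Type": "A",
                        "TTL": 60,  # short TTL so clients see changes quickly
                        "ResourceRecords": [{"Value": new_ip}],
                    },
                }],
            },
        )

    point_dns_at_dr("Z0000000EXAMPLE", "app.example.com.", "203.0.113.10")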

Keeping the Application Database Up to Date
It is common for applications to have large databases.  Moving the data to the cloud and keeping it current requires first loading the entire database into the cloud and then shipping incremental backups and merging them into that database.  To address this use case, instead of maintaining a hot standby we use Kaavo IMOD to automatically bring up the database servers in the cloud whenever a new incremental backup is available, merge the incremental backup, save the merged database, and shut the servers down (sketched below).  This way, in case of DR, the latest merged database is always available for restoring the application.  This approach provides a reasonable RTO without incurring the additional cost of maintaining a hot database standby.
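
A hedged sketch of that merge-and-shutdown cycle, again using boto3; the instance and volume IDs are hypothetical, and apply_incremental() stands in for whatever replays the incremental backup (e.g., WAL or binlog files) into the database.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    DB_INSTANCE = "i-0db0000000000000a"  # hypothetical DR database server
    DB_VOLUME = "vol-0aaa000000000000b"  # hypothetical data volume

    def merge_incremental_backup(apply_incremental) -> None:
        """Start the DR database, merge the new increment, snapshot, stop."""
        ec2.start_instances(InstanceIds=[DB_INSTANCE])
        ec2.get_waiter("instance_status_ok").wait(InstanceIds=[DB_INSTANCE])

        apply_incremental()  # replay the incremental backup into the database

        # Preserve the merged state, then stop paying for the idle server.
        ec2.create_snapshot(VolumeId=DB_VOLUME,
                            Description="Merged DR database backup")
        ec2.stop_instances(InstanceIds=[DB_INSTANCE])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[DB_INSTANCE])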

Applying and Maintaining Patches
A typical application requires the following two types of updates during its lifecycle:

  1. Updating Application Code: This is straightforward: using Kaavo IMOD, we set up automation to pick up the latest code and configuration for the application from the production deployment.  This ensures that the code and configuration changes for each new release of the application or service are available in the cloud for DR (see the sketch after this list).

  2. OS Patches and Third-Party Software Updates: Sometimes custom patches or updates to the OS or third-party software are required.  These types of changes are best handled through a change-control process that requires sign-off from the team owning the DR process.  The DR team can review each change and, if required, make and test the corresponding changes to the DR automation for the application.
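
As an illustration of the first item, a minimal sketch of publishing each production release to cloud storage so the DR restore always deploys the current version; the bucket, paths, and version naming are hypothetical, and the article does this with Kaavo IMOD automation.

    import boto3

    s3 = boto3.client("s3")
    DR_BUCKET = "example-dr-artifacts"  # hypothetical bucket name

    def publish_release_to_dr(version: str) -> None:
        """Copy the release artifact and its config to the DR bucket."""
        for name in (f"app-{version}.tar.gz", f"app-{version}.config.yaml"):
            s3.upload_file(f"/srv/releases/{name}", DR_BUCKET, name)

    # Call from the production release pipeline after each deployment:
    publish_release_to_dr("2.4.1")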


More Stories By Jamal Mazhar

Jamal Mazhar is Founder & CEO of Kaavo. He has more than 15 years of experience in technology, engineering, and consulting with a range of Fortune 500 companies, including GE and ING. He established ING’s “Center of Excellence for B2B,” which streamlined $2 billion per month in electronic money transfer operations. As Lead Architect on the GE Capital e-Business team, Jamal directed analysis and implementation efforts and improved the performance of a website generating more than $1 billion in annual lease revenues. At Trilogy he provided technical and managerial expertise for several large-scale e-business implementation projects for companies such as Boeing, NCR, Gartner, British Airways, Qantas Airways, and Alltel. Jamal has a BS in Electrical and Computer Engineering from the University of Texas at Austin and an MBA from NYU Stern School of Business.
