
The Answer Is the Cloud – Now What’s the Question?

Cloud Computing represents the new way for businesses to re-connect with their customers

In Lewis Carroll's classic story "Through the Looking Glass," Humpty Dumpty remarked: "When I use a word, it means just what I choose it to mean - neither more nor less." It seems that the same principle applies to almost any industry expert and IT vendor when they talk about Cloud Computing. So, in an effort not to fall into the same trap as Humpty Dumpty, let's start with the obvious first question:

What exactly is Cloud Computing?
The most authoritative definition comes from the National Institute of Standards and Technology (NIST), the U.S. federal agency that works with industry to develop and apply technology, measurements, and standards. The latest version of the "NIST Definition of Cloud Computing" is available online, but it can be summarized as shown in Figure 1: five essential characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service), three service models (Software, Platform, and Infrastructure as a Service), and four deployment models (private, community, public, and hybrid clouds).

What's driving Cloud Computing today?
As the world gradually pulls itself out of recession, companies are starting to implement "return to growth" strategies, which means growing revenue rather than trying to cut their way to profitability. An essential part of this is to re-connect with customers by aligning marketing and sales channels with the ways that customers want to evaluate and purchase products. As they do this, an undeniable truth emerges: the way that customers expect to interact with them has undergone a fundamental and permanent change, on a scale and at a pace never seen before. Businesses find a whole new generation of customers who are impatient, unencumbered by antiquated notions such as brand loyalty, and who expect things to work the way they want them to work. Customers now demand speed, immediacy, and ease of use. They expect to be able to do business with you wherever and whenever they want, on whatever device they choose. The new customer experience benchmarks are Facebook, YouTube and iTunes, and if you can't provide that quality of experience, they'll simply find someone who can.

So when businesses turn their attention from survival mode to growth mode, they quickly realize that "reconnecting with the customer" is not a return to business as usual, but something that requires a complete rethink of the way they work, both externally and internally.

The new business imperatives for customer interaction are agility to meet rapidly changing market conditions, flexibility in the way that they do business and rapid time-to-value as trends are increasingly measured in days and weeks - not months and years. Inside organizations, new tools and business processes are needed to manage new ways to create demand, manage new distribution channels, communicate value to customers and provide visibility on rapidly changing customer trends.

Given the huge amount of publicity, it is inevitable that the CEO will hear or read the pitch that "Cloud provides agility, flexibility, and quicker time-to-value," and get hooked. What's keeping CEOs awake at night is the need for a fundamental change in the way they interact with customers, and the answer is right there in front of them - Cloud Computing. That is all they need to start challenging the IT department, immediately, to develop a cloud strategy. As proof of this, a recent survey conducted by the 451 Group in June 2010 confirms that it is CEOs, not CIOs, who are driving Cloud Computing initiatives in most organizations.

The answer to the question - what's driving cloud computing - is very clear: business needs. Led by the CEO, the primary driver of cloud computing inside most organizations today is the line of business, where it's seen as an essential component of a "return to growth" strategy. Reducing cost, the primary focus of the last few years, is still important, but in an increasing number of company budgets it is no longer the top priority.

What does this mean to IT?
IT's traditional reaction to pressure from the business and customers, especially in larger enterprises, is to incorporate new requirements into the rolling three-year or five-year strategic IT plan. After all, building a new sales force automation or customer relationship management solution takes time - there are RFI and RFQ processes to go through, detailed ROI calculations, budget approval cycles, and extensive, detailed vendor contract negotiations. Once that's all done, the lengthy implementation phase can begin, where the chosen solution is customized (sometimes extensively) to fit the company's systems.

For most businesses, this process is a frustrating "take it or leave it" experience driven by IT, executed at IT's pace, and riddled with delays and cost overruns. What's more, it's completely inconsistent with the customer-facing and internal imperatives that the CEO and business leaders are now grappling with - a mode in which IT cannot continue to operate.

Cloud Computing offers a compelling alternative to the old way of providing IT services. Instead of internally developed monolithic systems, with lengthy and costly implementations of customized third-party business solutions, Cloud Computing provides an agile and flexible environment with shorter solution implementation cycles at a much lower cost. It represents a fundamental shift in the way that enterprises acquire and implement new IT functionality (computing power, storage, software, etc.) to support customer and organizational needs. In short, Cloud Computing offers IT a new way of implementing the functionality that the business units are demanding, and at a speed and cost that meets their expectations.

What this means to IT is that it faces a critical choice, and one that has to be made soon: either "do nothing" or "lead from the front." If IT does nothing, business units now have a choice of their own, and they'll turn to any of the hundreds of SaaS vendors that can deliver 95 percent of the new functionality they need. These "fly under the corporate IT radar" solutions can be delivered as fast as it takes to enter credit card information, so a business unit can have a great sales force automation solution today with no commitment, no delay, and no IT.

"Leading from the front" is the only right course. IT owns IT, regardless of whether it comes from inside or outside the organization. Along with delivering completely new applications to the business, Cloud Computing will allow IT to enhance the functionality of existing applications by leveraging content and services from third-party providers. These "borderless" applications offer a best-of-both-worlds approach - the existing investments in legacy applications and the "systems of record" are protected, and new functionality to meet new needs can be delivered quickly and at a low initial cost.

What are the risks, and how can IT mitigate them?
From a line-of-business perspective, Cloud Computing is raising expectations on how quickly and cost-effectively new IT functionality can be made available to them. More important, even though the delivery chain for these "borderless applications" now crosses organizational and geographic boundaries, users will still expect the applications to perform well, and will hold IT accountable if they don't.

The bottom line is that IT has to meet the business' expectation of faster delivery of new functionality and good performance, while at the same time addressing two key risks: ensuring that sensitive data remains protected in compliance with company policy and state/federal legislation; and maintaining end-to-end visibility and control of service performance and availability of borderless applications.

For many IT organizations, data security was a "show stopper" for adopting Cloud Computing, especially for applications in public clouds, simply because there were no existing solutions that addressed the unique security issues posed by the cloud. However, a new generation of security products from industry leaders such as Symantec, McAfee (to be acquired by Intel), and Covisint is changing the security landscape. When combined with best practices from industry analysts such as Gartner, the issues are being effectively addressed for an increasing number of companies, including those in heavily regulated industries. Security is, and always will be, a critical issue whether companies are "in the cloud" or not, but it is no longer necessarily a show stopper.

Performance and Availability
From an end-user perspective, poor performance or non-availability of an application looks exactly the same regardless of where the problem lies in the service delivery chain - with a service provider, in the cloud, in the data center, across the network, in the enterprise, or on the end user's own device - and it has exactly the same productivity impact on the business. Rapid resolution requires end-to-end visibility of the entire service delivery chain to isolate and fix the problem. The problem for many organizations is that the current generation of Application Performance Management (APM) solutions from most vendors fails to meet that challenge, because they take a narrow, compartmentalized view of data center or Internet performance issues.

Recent experience from companies that have actually implemented cloud solutions paints an interesting picture of where IT should focus its risk mitigation efforts to ensure that cloud delivers real business benefits. Prior to implementation, many IT departments were unconvinced that Cloud Computing would deliver the promised business agility and flexibility benefits, and believed that the big win would be cost savings. They also believed that security concerns would tower above everything else as the number one unresolved problem, and that application performance and service level management problems could be solved by simply extending the capabilities of their existing APM solutions. Practical experience was quite different: agility and flexibility turned out to be the number one win by a long shot, and performance and availability turned out to be tough problems that couldn't be effectively solved with existing or planned APM solutions.

What's so hard about managing performance in the cloud?
Service providers are typically unwilling to commit to specific service level agreements, and for those that do, there is a lot of inconsistency - and confusion - in their definitions of performance and availability. Amazon, for instance, currently quotes availability in terms of "outages" - periods of five minutes or more during the service year in which Amazon EC2 was in the state of "region unavailable." Others prefer to quote more general statistics such as "multiple redundant gigabit Internet connections" and "greater than 99.95% service availability." To put these figures into context, 99.95% availability means that unplanned downtime of a cloud-based service will average no more than about 22 minutes per month. Compare this with roughly 95 minutes per month of downtime for the average Exchange server, and the initial reaction is that there's no need to worry about performance and availability. However, this is a very dangerous assumption, since it ignores a critically important point: the service provider is just one part of the application delivery chain.
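As a quick sanity check on these SLA figures, here is a minimal sketch in Python (the article itself contains no code, so the language choice is illustrative) that converts an availability percentage into allowable monthly downtime, assuming a 30-day month:

    # Convert an availability percentage into allowable monthly downtime.
    # Assumes a 30-day month; real SLAs define their own measurement
    # windows and exclusions, so treat this as an approximation.
    def downtime_minutes_per_month(availability_pct: float) -> float:
        minutes_per_month = 30 * 24 * 60  # 43,200 minutes
        return minutes_per_month * (1 - availability_pct / 100)

    for pct in (99.95, 99.9, 99.5):
        print(f"{pct}% availability allows "
              f"{downtime_minutes_per_month(pct):.1f} min/month of downtime")
    # 99.95% -> 21.6, 99.9% -> 43.2, 99.5% -> 216.0

Note that even a "good" 99.95% figure says nothing about when the downtime occurs or how it is measured.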

As noted above, the end user cannot tell - and does not care - where in the application delivery chain a problem actually lies. Consider, for example, a mortgage loan pre-approval application that utilizes cloud-based services: what happens if one of those services performs badly or is not available at all? How can the enterprise determine whether it's a service provider problem, a network problem, or an end-user device problem?

To further complicate things, geographic location can also have a dramatic impact on the overall performance of a cloud-based application - contrary to the popular belief that Internet communication is virtually instantaneous. A worst-case scenario is that "all lights are green" in the data center, but some (not all) customers are complaining about performance issues. Without detailed fault-domain information across the entire delivery chain, it is virtually impossible to isolate and fix performance and availability issues in a timely manner, before they start to impact users.
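To make the fault-domain idea concrete, below is a minimal sketch of per-phase request timing in Python. The host name is a placeholder, and the phase boundaries are deliberately simplified; real APM and synthetic-monitoring tools measure far more phases, from many locations:

    # Time the DNS, TCP-connect, and server-response phases of one
    # HTTP request, to suggest where in the delivery chain latency
    # accumulates. HOST is a placeholder, not a real measured service.
    import socket
    import time
    from http.client import HTTPConnection

    HOST, PORT, PATH = "example.com", 80, "/"

    t0 = time.time()
    addr = socket.getaddrinfo(HOST, PORT)[0][4][0]   # DNS resolution
    t1 = time.time()
    sock = socket.create_connection((addr, PORT), timeout=10)  # TCP connect
    t2 = time.time()

    conn = HTTPConnection(HOST, PORT)
    conn.sock = sock                  # reuse the already-timed connection
    conn.request("GET", PATH)
    conn.getresponse().read()         # time to complete server response
    t3 = time.time()

    print(f"DNS:      {(t1 - t0) * 1000:7.1f} ms")
    print(f"Connect:  {(t2 - t1) * 1000:7.1f} ms")
    print(f"Response: {(t3 - t2) * 1000:7.1f} ms")

A slow DNS or connect phase points toward the network or the "last mile"; a slow response phase points toward the service provider - exactly the kind of attribution the "all lights are green" scenario requires.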

CloudSleuth Web Portal: The Compuware-sponsored CloudSleuth community web portal is designed to meet the growing need for authoritative, objective measurements of cloud service providers. It provides free access to real-time performance and availability visualizations of leading cloud providers around the world, plus other valuable data such as blogs, forums and white papers - all focused on best practices for building, deploying, and managing cloud-based applications.

To illustrate how all the components of the delivery chain can impact the performance of a web-based application, Figures 5 through 7 show actual measurements from CloudSleuth, the Compuware-sponsored web portal described above. CloudSleuth measures the performance of a simple application (no I/O- or CPU-intensive tasks) deployed anonymously at a number of cloud service providers. The Gomez Performance Network is then used to access those applications from backbone and "Last Mile" locations around the world to provide actual performance results. All the tests below use only Amazon EC2 East and West.

"Figure 5: Last Mile" Internet Service Provider (ISP) Performance: This test shows how the response time is impacted by the performance of the user's ISP (the so-called "last mile" connection).

Note that users in Wyoming are experiencing performance issues because of "last mile" connectivity problems, not because of Amazon.

Figure 6: Geography: The graphs clearly show that the farther away the user is from the application, the longer the response time.

This test also illustrates that if enterprises have a choice of service providers, it is best to choose one that is nearest to their user and/or customer base.

Figure 7: Time of Day: This test shows that the performance of cloud service providers is not constant, but can vary quite widely throughout the day. This is generally because the service provider is handling a varying load from other users on their systems.

Figure 7 also illustrates another practical point about cloud performance: The cloud theoretically provides "rapid elasticity," meaning that wide variations in load can be accommodated without significantly impacting the performance of individual applications. In reality, cloud service providers have to live by the same rules of economics as everyone else - they do not have banks of servers lying idle to cope with these peaks in demand. Although applications operate in their own "instances" at the service provider, their performance is affected by what their neighbors are doing!
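For teams that want to see this time-of-day effect on their own application, here is a hedged sketch of a single-location probe that buckets response times by hour of day (the URL and interval are illustrative placeholders; a real synthetic-monitoring setup probes from many geographic locations, as CloudSleuth does):

    # Probe one URL at a fixed interval and aggregate response times
    # by hour of day, to expose daily variation like that in Figure 7.
    # URL and INTERVAL_SECONDS are illustrative placeholders.
    import time
    import urllib.request
    from collections import defaultdict
    from statistics import mean

    URL = "http://example.com/"
    INTERVAL_SECONDS = 300            # one probe every five minutes
    by_hour = defaultdict(list)       # hour of day -> response times (ms)

    while True:
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=30).read()
            elapsed_ms = (time.time() - start) * 1000
            hour = time.localtime(start).tm_hour
            by_hour[hour].append(elapsed_ms)
            print(f"hour {hour:02d}: {elapsed_ms:.0f} ms "
                  f"(hourly avg {mean(by_hour[hour]):.0f} ms)")
        except OSError:
            print("probe failed - counts against availability")
        time.sleep(INTERVAL_SECONDS)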

Conclusion - Putting It All Together
Cloud Computing represents the new way for businesses to re-connect with their customers. It allows IT to meet the business need for agility, flexibility and time-to-value - all of these are vital to success in the new, customer-driven world where "work anywhere is what we do." But despite the increasingly proven business benefits, Cloud Computing introduces new business risks, and IT must play a leadership role in addressing those risks.

A key concern is managing the end-user experience of cloud-based applications by maintaining complete visibility of the performance of these new borderless applications. Fred Smith, the founder of FedEx, once remarked: "Information about the package is as important as the package itself." He was making the case that it's not enough to provide a general statement of service quality; you must be able to present information on the particular service you are delivering to a particular customer at a particular time, regardless of where the package is. The same is true for borderless applications - they require a solution that can monitor and manage application performance regardless of physical, virtual or cloud attributes.

Traditional enterprise application performance management tools are unsuited to the task of managing this new generation of applications, because they only provide narrow, technology-centric keyhole views into the performance of specific components or processes. The only way to truly solve performance and availability problems is through a holistic view of application performance that encompasses the entire application delivery chain.

[Figure: End-to-End Visibility Across the Application Delivery Chain]

1. The NIST Definition of Cloud Computing:

More Stories By Richard Stone

Richard Stone is Senior Solution Manager at Compuware, responsible for Cloud-based Application Performance Management solutions.

Prior to joining Compuware, Richard held senior marketing and product management positions at Hewlett-Packard, Compaq, and a number of other US and European IT companies. He has extensive experience in cloud-based solutions and technologies, and has brought a number of cloud-based solutions to market: these include cross-industry solutions such as E-Mail, Web Conferencing, and Sales Force Automation, and vertical market solutions in industries such as Insurance, Retail Banking, and Telecommunications. His domain expertise also includes mobile computing, security, compliance, and high-availability solutions for all market segments (SMB, Enterprise, and key verticals such as Finance, Government, Healthcare, and Retail).

