
Cloud Computing: Creating a Generic (Internal) Cloud Architecture

Do Cloud-like architectures have to remain external to the enterprise? No.

Kenneth Oestreich's Blog

I've been taken aback lately by the tacit assumption that cloud-like (IaaS and PaaS) services have to be provided by folks like Amazon, Terremark and others. It's as if these providers do some black magic that enterprises can't touch or replicate.

However, history's taught the IT industry that what starts in the external domain eventually makes its way into the enterprise, and vice-versa. Consider Google: it began with internet search, and later offered an enterprise search appliance. Then there's the reverse: an application, say a CRM system, leaves the enterprise to be hosted externally as SaaS, such as But even in this case, the first example then recurs -- as begins providing internal appliances back to its large enterprise customers!

I am simply trying to challenge the belief that cloud-like architectures have to remain external to the enterprise. They don't. I believe it's inevitable that they will soon find their way into the enterprise, and become a revolutionary paradigm for how *internal* IT infrastructure is operated and managed.

With each IT management conversation I have, the concept I recently put forward becomes clearer and more inevitable: that an "internal cloud" (call it a cloud architecture, or utility computing) will penetrate enterprise datacenters.

Limitations of "external" cloud computing architectures

Already, a number of authorities have pretty clearly outlined the pros and cons of using external service providers as "cloud" providers. For reference, there is the excellent "10 reasons enterprises aren't ready to trust the cloud" by Stacey Higginbotham of GigaOM, as well as a piece by Mike Walker of MSDN regarding "Challenges of moving to the cloud". So it stands to reason that innovation will work around these limitations, borrowing the positive aspects of external service providers, omitting the negatives, and offering the result to IT Ops.

Is an "internal" cloud architecture possible and repeatable?

So here is my main thesis: that there are software IT management products available today (and more to come) that will operate *existing* infrastructure in a manner identical to the operation of IaaS and PaaS. Let me say that again -- you don't have to outsource to an "external" cloud provider as long as you already own legacy infrastructure that can be re-purposed for this new architecture.

This statement -- and the associated enabling software technologies -- marks the beginning of the final commoditization of compute hardware. (BTW, I find it amazing that some vendors continue to tout that their hardware is optimized for cloud computing. That is a real oxymoron.)

As time passes, cloud-computing infrastructures (OK, Utility Computing architectures if you must), coupled with the trend toward architecture standardization, will continue to push specialized HW out of the picture. Hardware margins will continue to be squeezed. (BTW, you can read about the "cheap revolution" in Forbes, featuring our CEO Bill Coleman.)

As the VINF blog also observed, regarding cloud-based architectures:

You can build your own cloud, and be choosy about what you give to others. Building your own cloud makes a lot of sense; it's not always cheap, but it's the kind of thing you can scale up (or down) with a bit of up-front investment. In this article I'll look at some of the practical, more infrastructure-focused ways in which you can do so.

Your "cloud platform" is essentially an internal shared-services system where you can actually and practically implement a "platform" team that operates and capacity-plans the cloud platform; they manage its day-to-day availability and maintenance, and its expansion/contraction.

Even back in February, Mike Nygard observed reasons and benefits for this trend:

Why should a company build its own cloud, instead of going to one of the providers?

On the positive side, an IT manager running a cloud can finally do real chargebacks to the business units that drive demand. Some do today, but on a larger-grained level... whole servers. With a private cloud, the IT manager could charge by the compute-hour, or by the megabit of bandwidth. He could charge for storage by the gigabyte, and with tiered rates for different availability/continuity guarantees. Even better, he could allow the business units to do the kind of self-service that I can do today with a credit card and The Planet. (OK, The Planet isn't a cloud provider, but I bet they're thinking about it. Plus, I like them.)
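
Nygard's metering idea is easy to make concrete. Below is a minimal, hypothetical sketch (in Python) of what per-unit chargeback with tiered storage rates might look like; all rate values and tier names are illustrative assumptions, not figures from any actual provider.

```python
# Hypothetical chargeback sketch -- rates and tiers are illustrative assumptions.

# Per-unit rates an internal IT provider might publish to business units.
RATES = {
    "compute_hour": 0.12,    # $ per CPU-hour
    "bandwidth_mbit": 0.02,  # $ per megabit transferred
}

# Tiered storage: stronger availability/continuity guarantees cost more per GB-month.
STORAGE_TIERS = {
    "archive": 0.05,   # best effort
    "standard": 0.15,  # nightly backup
    "premium": 0.40,   # replicated, continuity SLA
}

def monthly_chargeback(compute_hours, bandwidth_mbits, storage_gb_by_tier):
    """Compute one business unit's monthly bill from metered usage."""
    bill = compute_hours * RATES["compute_hour"]
    bill += bandwidth_mbits * RATES["bandwidth_mbit"]
    for tier, gb in storage_gb_by_tier.items():
        bill += gb * STORAGE_TIERS[tier]
    return round(bill, 2)

# Example: one business unit's metered usage for the month.
print(monthly_chargeback(
    compute_hours=1200,
    bandwidth_mbits=50000,
    storage_gb_by_tier={"standard": 500, "premium": 50},
))  # -> 1239.0
```

The point isn't the arithmetic; it's that once usage is metered at this granularity, fine-grained chargeback and business-unit self-service fall out almost for free.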
We are seeing the beginning of an inflection point in the way IT is managed, brought on by (1) the interest in (though not yet adoption of) cloud architectures, (2) the increasing willingness to accept shared IT assets (thanks to VMware and others), and (3) the budding availability of software that allows "cloud-like" operation of existing infrastructure, but in a whole new way.

How might these "internal clouds" first be used?

Let's be real: there are precious few green-field opportunities where enterprises will simply decide to change their entire IT architecture and operations into this "internal cloud" -- i.e., implement a Utility Computing model out of the gate. But there are some interesting starting points beginning to emerge:

  • Creating a single-service utility: by this I mean that an entire service tier (such as a web farm, application server farm, etc.) moves to being managed in a "cloud" infrastructure, where resources ebb and flow as needed by user demand.
  • Power-managing servers: using utility-computing IT management automation to control the power states of machines that are temporarily idle, but NOT actually dynamically provisioning software onto servers. Firms are getting used to the idea of using policy-governed control to save on IT power consumption as they get comfortable with utility-computing principles; they can then selectively activate the dynamic provisioning features as they see fit (see the sketch after this list).
  • Using utility-computing management/automation to govern virtualized environments: it's clear that once firms virtualize/consolidate, they later realize that there are more objects to manage (virtual sprawl), rather than fewer; plus, they've created "virtual silos", distinct from the non-virtualized infrastructure they own. Firms will migrate toward an automated management approach to virtualization where -- on the fly -- applications are virtualized, hosts are created, apps are deployed/scaled, failed hosts are automatically re-created, etc. Essentially, a services cloud.
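
To make the power-management item concrete, here is a minimal, hypothetical sketch of such a policy loop: servers idle past a threshold get powered down, while a warm minimum always stays running. The `Server` model, thresholds, and names are assumptions for illustration only; a real utility-computing product would drive actual power controls (e.g., IPMI/iLO) rather than flip a flag.

```python
from dataclasses import dataclass

IDLE_MINUTES_BEFORE_SLEEP = 30   # policy: power down after 30 idle minutes
MIN_ACTIVE_SERVERS = 2           # policy: always keep a warm minimum running

@dataclass
class Server:
    name: str
    idle_minutes: int
    powered_on: bool = True

def apply_power_policy(servers):
    """One pass of a policy-governed power controller (no re-provisioning)."""
    active = [s for s in servers if s.powered_on]
    # Consider the longest-idle servers first.
    for s in sorted(active, key=lambda s: s.idle_minutes, reverse=True):
        if len(active) <= MIN_ACTIVE_SERVERS:
            break  # never drop below the warm minimum
        if s.idle_minutes >= IDLE_MINUTES_BEFORE_SLEEP:
            s.powered_on = False  # in practice: an IPMI/iLO power-off call
            active.remove(s)

farm = [Server("web01", 5), Server("web02", 45), Server("web03", 90), Server("web04", 0)]
apply_power_policy(farm)
print([(s.name, s.powered_on) for s in farm])
# -> web02 and web03 powered down; web01 and web04 stay up
```

Note that nothing here re-provisions software onto servers -- which is exactly why this makes a low-risk first step toward utility computing.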

It is inevitable that the simplicity, economics, and scalability of externally-provided "clouds" will make their way into the enterprise. The question isn't if, but when.

More Stories By Kenneth Oestreich

Ken Oestreich is VP of Product Marketing with Egenera, and has spent over 20 years developing and introducing new products into new markets. Recently, he’s been involved in bringing utility- and cloud-computing technologies to market. Previously, Ken was with Cassatt, and held a number of developer, alliance and strategy positions with Sun Microsystems.

Most Recent Comments
sajai krishnan 08/26/08 09:53:44 PM EDT

Very much on topic. In our parallel area around cloud storage we see as much interest in internal/private storage clouds as in external/public storage clouds. Bandwidth and security are clearly reasons to go with a private cloud, whereas getting offsite copies is certainly one reason to consider a public cloud. There is the additional reason that by building your own storage cloud you can tune its performance characteristics by having, for example, beefy, high-performing nodes for streaming, or inexpensive nodes with a lot of disks for archival applications.

As for service providers -- I think we will see service providers delivering typical public services like S3, but they could also provide "insourcing" services ... i.e., a service provider managing a dedicated internal cloud for a Fortune 100 data center in a colo model. I think AT&T's recent Synaptic Hosting is probably headed in that direction.

There are a few different ways to skin this cat in terms of implementation. The key is that the technology matures, and that customers get familiar with the commodity scale-out economics and the easy management model that are at the core of this approach.

Sajai Krishnan, CEO ParaScale

amuletc 08/25/08 08:14:58 PM EDT

By Dan D. Gutierrez
CEO of

I really like your concept of an "internal cloud"! When my firm launched the web's first Database-as-a-Service offering in 1999, we had a sales option to create a special instance of our product for an enterprise that wanted the convenience of SaaS but was concerned about privacy and security issues. Bringing in our service as an internal cloud solved these issues. Fast-forward nearly 10 years, and it is great to see this concept mentioned in this timely article.
