By Alex Givens
February 25, 2009 03:17 AM EST
Enterprises committed to a virtualization strategy need to ensure that management and automation of mission-critical IT systems and applications are included in their planning. Enterprises also need to establish procedures that allow them to maximize the benefits of consolidating to a virtualized platform and mitigate potential business risk across a landscape that has become abstract. Failure to do so will impact the success of projects and dilute the value of a virtualization strategy.
Spiraling energy costs, the need to squeeze extra IT power out of fixed data center real estate footprints, and environmental concerns have shifted virtualization from a commodity tool to a center-stage role in the IT strategy of many organizations.
The history of virtualization can be traced back to the 1970s, when mainframe computers could be virtually partitioned to host multiple guest machines. This proved an ideal environment in which to install and configure new operating platforms, upgrade existing systems, and give software developers a sandbox for isolation testing. In its 21st-century incarnation, history has repeated itself: virtualization usually started life deep within the enterprise data center. IT operations and application development teams rapidly recognized the extra flexibility they gained from not needing to procure extra hardware to service ad hoc processing demands or software testing.
With the shift from commodity to a center-stage role for virtualization, there is a corresponding shift in planning required to ensure that all IT layers in an enterprise are fully aligned to perform in a new virtualized landscape. In addition to ensuring that the underlying IT infrastructure components are in place each time a new virtual machine is provisioned, it's imperative that the business applications as well as the operational processes and procedures are fully established to provide the comprehensive set of services that end users rely on to do their jobs.
From an end-user or functional user perspective, whether an environment is virtualized or not is largely irrelevant. Such users simply expect their applications and programs to work - virtualization for them is a back-office, and therefore mostly unseen, technology. Planning for virtualization should strive to minimize apparent adverse impact on users' day-to-day activities.
Virtualization transforms a data center into a dynamic IT environment that can provide the flexibility and scalability capable of responding to the varying demands driven by a dynamic 24x7 global marketplace. However, while the ability to add and subtract processing capacity without needing to power up extra hardware offers enterprises greater agility, there are accompanying challenges that must be addressed.
An organization's current system monitoring tools are probably very good at monitoring server statistics (like CPU utilization, I/O, etc.) and raising alarms if certain thresholds are exceeded. In a virtualized environment, such alarms should be expected to initiate action that can start, stop, or move virtual machines within the environment to help alleviate the detected resource exception. Planning should consider how system monitors can take actions that modify the virtual environment.
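The monitor-to-action loop described above can be sketched as follows. This is a minimal illustration, not a real product integration: the VirtualEnvironment class, its start_vm/stop_vm methods, and the threshold values are all hypothetical stand-ins for a hypervisor management API and site policy.

```python
# Sketch: a threshold alarm that acts on the virtual environment.
# VirtualEnvironment and its methods are hypothetical stand-ins for a
# real hypervisor management API; the thresholds are illustrative.

CPU_HIGH = 85.0  # start an extra VM above this utilization (percent)
CPU_LOW = 20.0   # stop a VM below this utilization (percent)

class VirtualEnvironment:
    """Hypothetical facade over a hypervisor's management interface."""
    def __init__(self):
        self.running_vms = ["vm-01"]
        self._next_id = 2

    def start_vm(self):
        vm = f"vm-{self._next_id:02d}"
        self._next_id += 1
        self.running_vms.append(vm)
        return vm

    def stop_vm(self):
        if len(self.running_vms) > 1:  # always keep one VM running
            return self.running_vms.pop()
        return None

def on_alarm(env, cpu_percent):
    """Translate a monitoring alarm into an action on the environment."""
    if cpu_percent > CPU_HIGH:
        return ("started", env.start_vm())
    if cpu_percent < CPU_LOW:
        return ("stopped", env.stop_vm())
    return ("no-op", None)

env = VirtualEnvironment()
print(on_alarm(env, 92.0))  # high CPU: a new VM is started
print(on_alarm(env, 50.0))  # within thresholds: nothing happens
```

The point is the shape of the loop: the monitor's alarm handler returns an action against the virtual environment rather than merely paging an operator.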
As each new virtual machine is spawned, the IT Operations team is left with the challenge of recognizing that there is an extra machine available that requires managing and monitoring. This same team also assumes responsibility for manually routing workload to this additional resource, continually checking systems performance and being ready to respond to messages and resolve problems as and when they occur.
A long-running, complex business process is known to contain a large processing "spike" at a certain point. In a virtualized environment, additional virtual machines can be started just prior to the spike (and stopped just after) to provide additional processing horsepower. The orchestrator (personnel or product) of the business process should be expected to be sufficiently aware of the virtualized environment to note the additional virtual machine(s) and take advantage of them. Without that awareness, even with the flexibility to dynamically add horsepower, an important potential benefit of the virtualized environment is lost. Planning should look at how business process orchestrators can take actions that affect the virtual environment.
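An orchestrator handling the spike scenario above might look like the following sketch. The step definitions and the start_vms/stop_vms helpers are illustrative assumptions, not a real orchestration product's API.

```python
# Sketch: a business-process orchestrator that scales the virtual
# environment around a known processing spike. The process format and
# the VM helpers are illustrative assumptions.

def start_vms(n):
    """Pretend to provision n extra virtual machines."""
    return [f"spike-vm-{i}" for i in range(n)]

def stop_vms(vms):
    """Pretend to decommission the listed virtual machines."""
    return len(vms)

def run_process(steps):
    """Run steps in order, adding VMs just before a spike step and
    releasing them as soon as it completes."""
    log = []
    for name, extra_vms in steps:
        vms = start_vms(extra_vms) if extra_vms else []
        log.append((name, len(vms)))
        # ... the step's actual work would execute here ...
        if vms:
            stop_vms(vms)  # release capacity right after the spike
    return log

# A three-step process whose middle step is the known spike.
process = [("extract", 0), ("transform", 4), ("load", 0)]
print(run_process(process))  # [('extract', 0), ('transform', 4), ('load', 0)]
```

Capacity is attached to the step that needs it, so the extra horsepower exists only for the duration of the spike.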
This increase in workload combined with the perennial lack of qualified, skilled personnel puts tremendous pressure on IT operations. Instead of continually trying to find, train, and retain staff, organizations need to incorporate the tribal operations management knowledge that has accumulated over many years into the fabric of their virtualized environments. Adopting an automated approach would not only reduce operational pressures; it would also mitigate business risk by reducing the exposure of critical systems and applications to unaccountable manual intervention.
Drilling down into the previous example - if personnel are responsible for orchestrating the business process, one can envision a very detailed and carefully written manual process document for them to follow to manage the spike, taking advantage of the established virtualized environment. The burden (what higher-value activity could a person be doing?) and risk (what if a person makes a mistake?) of such a manual procedure could be eliminated by using an automated orchestrator - but only so far as the orchestrator is aware of and can interact with and control the virtualized environment. Again, without the awareness, an important potential benefit of the virtualized environment is lost. Planning should work to convert or translate manual processes (to the greatest extent possible) into automated processes.
Bringing extra virtual machines online to cater for peak processing demands, optimizing the distribution of batch jobs so they complete ahead of critical deadlines, and automatically responding to errors with corrective actions are just a few examples of workload management challenges arising in a virtualized world that can be simplified using automation. Beyond the infrastructure layer there's an equivalent set of tasks and procedures required to drive application processing, tasks that have traditionally relied on manual interaction by data center or end-user personnel. The virtualization of applications generates a similar set of challenges and requires equal attention if enterprises are to realize benefits throughout their IT landscape.
In virtualized environments, the fixed relationships between hardware, systems, and applications no longer exist. Hardwired, prescribed associations, ranging from a command sequence in an operations handbook to fixed parameters embedded in a piece of application code, can result in different interpretations when presented in a virtualized world. Virtualization introduces an extra layer of abstraction between physical hardware devices and the software systems that an enterprise runs to support its business.
It's easy for a developer to write a program that runs well on a single server. However, without due consideration of the virtualized environment, it's all too likely that that same program won't run successfully across a landscape of virtual machines or hypervisors. Support for virtualized environments must be built into custom-developed code.
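One concrete form of that support is removing hardwired associations from the code itself. The sketch below, with an assumed DB_HOST variable name, shows the idea: resolve the target at run time from configuration injected by the provisioning layer instead of baking a single hostname into the program.

```python
# Sketch: avoiding hardwired associations in custom code. Instead of a
# fixed hostname baked into the program, the target is resolved at run
# time, so the same code works on whichever VM it lands. The DB_HOST
# variable name is an illustrative assumption.
import os

def resolve_database_host():
    """Prefer an externally supplied host; fall back to a default."""
    return os.environ.get("DB_HOST", "localhost")

# On a developer's single server this returns the default; in a
# virtualized landscape the provisioning layer injects the right value.
os.environ["DB_HOST"] = "db-vm-07.internal"
print(resolve_database_host())  # db-vm-07.internal
```

The same principle applies to file paths, queue names, and peer addresses: anything a hypervisor might relocate should be a parameter, not a constant.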
At the IT infrastructure management layer, there are IT housekeeping and administrative tasks that need to be executed: backups, snapshots, database clean-ups, file-transfer handling, and starting and stopping VMs. At the business application layer, there are functional processes and procedures that need to be undertaken: sales data uploads, order processing, invoicing, logistics, production, analytics and forecasting, finance and accounting, HR and customer care. Bringing together the execution of these activities ensures that everything around business and IT processes is properly managed and maintained. The scope of activities required will usually go well beyond the capability of an individual business application or systems management solution. Enterprises need to manage the suite of all interfaces around their virtual environments. They also need to be able to integrate the real and virtual environments in such a way that they can fully leverage the breadth and the depth of functionality that can be derived from their core applications and operating platforms.
IT housekeeping and administrative applications certainly must be "virtualization-aware" - indeed, some of the IT housekeeping tasks listed above are included in various hypervisors (e.g., snapshots). Business applications such as ERP, CRM, BI and DW must also be aware - it would make no sense to bring another virtual machine online for a particular application if the application itself had no awareness of its virtualized environment. There's some opportunity for application consolidation in terms of the applications used for managing IT housekeeping, administration, and business applications. The distinctions have blurred between certain classes of applications (e.g., job schedulers, system managers, business process managers) to such a degree that one new application may be able to replace the functionality of two or more older applications (see the references to an "orchestrator" in other parts of this article). Planning must include the business applications and each one's unique requirements.
Forming logical associations and utilizing logical views when managing virtualized systems and applications will allow IT departments to achieve greater flexibility and agility. When seeking to automate IT housekeeping procedures through to business processes, such as financial period-end close, creating a centralized single set of policy definitions that have embedded parameter variables not only ensures consistency and transparency across all virtualized machines and hypervisors - it will also reduce maintenance and administration overheads.
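A centralized definition with embedded parameter variables can be sketched like this. The policy text and parameter names are illustrative assumptions; the mechanism (one master template, per-VM substitution) is the point.

```python
# Sketch: one centralized policy definition with embedded parameter
# variables, instantiated per virtual machine. The policy command and
# its parameters are illustrative assumptions.
from string import Template

# Single master definition, maintained once for all VMs and hypervisors.
BACKUP_POLICY = Template(
    "backup --source ${data_dir} --target ${backup_share}/${vm_name} "
    "--window ${window}"
)

def render_policy(vm_name, overrides=None):
    """Fill the shared template with per-VM parameters."""
    params = {
        "vm_name": vm_name,
        "data_dir": "/var/data",
        "backup_share": "//backup01/nightly",
        "window": "01:00-03:00",
    }
    params.update(overrides or {})
    return BACKUP_POLICY.substitute(params)

print(render_policy("erp-vm-01"))
print(render_policy("crm-vm-02", {"window": "03:00-05:00"}))
```

Because every VM renders from the same template, a change to the policy is made once and is consistent everywhere, which is exactly the maintenance saving described above.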
Establishing a single metadata repository for such items as policy definitions, processing rules, and business processes is a positive step in any virtualized environment. If such a repository also holds data about the current state of play of the policies in force, which rules are in control, and processing status, then such data can be used in a predictive manner to proactively determine what virtual resources might be needed near-term and take action to make those resources available. Effort should be spent planning how metadata can be used to allow proactive management of the virtual environment.
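A minimal sketch of that predictive use of repository state follows. The schema (a series of recorded queue depths) and the simple average-based forecast are illustrative assumptions; any real deployment would use its own state records and forecasting rule.

```python
# Sketch: using state data from a metadata repository to predict
# near-term VM needs. The queue-depth records and the sizing rule
# (jobs_per_vm) are illustrative assumptions.
import math

def forecast_vms_needed(recent_queue_depths, jobs_per_vm=10):
    """Estimate how many VMs the next interval needs from the average
    queue depth recorded in the repository's status records."""
    avg = sum(recent_queue_depths) / len(recent_queue_depths)
    return max(1, math.ceil(avg / jobs_per_vm))

def plan_scaling(current_vms, recent_queue_depths):
    """Turn the forecast into a proactive action, ahead of any alarm."""
    needed = forecast_vms_needed(recent_queue_depths)
    if needed > current_vms:
        return ("start", needed - current_vms)
    if needed < current_vms:
        return ("stop", current_vms - needed)
    return ("hold", 0)

# Repository state shows queue depth climbing: start a VM proactively.
print(plan_scaling(2, [18, 24, 31]))  # ('start', 1)
```

The contrast with the monitoring example earlier is deliberate: alarms react after a threshold is crossed, whereas repository state lets the environment act before it is.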
Establishing the availability of virtual resources, determining current systems performance, and analysis of other metrics can be used at runtime to optimize the routing and dispatching of workloads. Process definitions can be dynamically configured using parameter overrides to run on the hypervisor server best suited to ensure end-user SLAs are satisfied.
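Runtime dispatch of this kind might look like the sketch below. The host names, metrics, and the scoring rule (least CPU load among hosts with enough free memory) are illustrative assumptions standing in for whatever SLA criteria a real scheduler would apply.

```python
# Sketch: routing a workload to the hypervisor host best suited to meet
# its SLA, based on runtime metrics. Hosts, metrics, and the scoring
# rule are illustrative assumptions.

def pick_host(hosts, min_free_gb=4):
    """Choose the least-loaded host with enough free memory."""
    eligible = [h for h in hosts if h["free_gb"] >= min_free_gb]
    if not eligible:
        raise RuntimeError("no host can satisfy the SLA")
    return min(eligible, key=lambda h: h["cpu_load"])["name"]

hosts = [
    {"name": "hyp-a", "cpu_load": 0.72, "free_gb": 16},
    {"name": "hyp-b", "cpu_load": 0.35, "free_gb": 2},   # too little memory
    {"name": "hyp-c", "cpu_load": 0.41, "free_gb": 8},
]
print(pick_host(hosts))  # hyp-c
```

The chosen host name would then feed the parameter override in the process definition, so the same definition runs anywhere without being edited.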
In the absence of an orchestrator to automate processing, system monitors can detect system events and raise alarms in a reactive fashion. Proactive and reactive attempts to modify the virtual environment are certainly valid. However, doing neither wastes some of the potential advantages of virtualization. Both proactive and reactive adjustments of the virtual environment should be planned for.
Securing and administering all process definitions in a centralized repository will support change control management. There's no need to manually check that script updates, necessary because a new version of a backup utility is being rolled out, have been propagated to all virtual machines. Critical activities that need to be run on virtual machines are protected against unauthorized updates and illegal use. Being able to maintain a record and report on all changes made to process definitions, as well as details of who executed what, where, when, and the outcome, supports enterprises in ensuring that their use of virtualization doesn't introduce additional operational risk and is compliant with IT governance strategy.
As highlighted earlier, automation provides a highly effective alternative to manual processes. If changes to the virtualized environment are automated (e.g., though predictive use of state data, automated response to alarms, and planned changes in a business process) then one expectation should be the existence of a good solid audit trail of actions taken by the automation orchestrator. Planning for compliance is a must.
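The who/what/where/when/outcome record described above can be sketched as follows; the record structure is an illustrative assumption rather than any particular product's audit format.

```python
# Sketch: an automation orchestrator that writes an audit record for
# every action it takes against the virtual environment. The record
# fields follow the who/what/where/when/outcome pattern; the structure
# itself is an illustrative assumption.
import datetime

class AuditedOrchestrator:
    def __init__(self):
        self.audit_trail = []

    def act(self, actor, action, target):
        outcome = "success"  # a real orchestrator records real results
        self.audit_trail.append({
            "who": actor,
            "what": action,
            "where": target,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "outcome": outcome,
        })
        return outcome

orch = AuditedOrchestrator()
orch.act("auto-scaler", "start_vm", "hyp-c")
orch.act("auto-scaler", "stop_vm", "hyp-a")
print(len(orch.audit_trail))  # 2
print(orch.audit_trail[0]["what"])  # start_vm
```

Because every change flows through the orchestrator, the audit trail is complete by construction, which is what makes the automated approach easier to keep compliant than ad hoc manual intervention.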
Instead of dusting off an old IT operations run book and updating it to support a virtualization strategy, enterprises need to realize that embedding knowledge and experience into automated procedures not only simplifies management and control of a virtualized world; it also ensures smart decisions are taken at the right time in the right context. An automated approach translates into improved throughput, greater accuracy, fewer errors, and less risk. Putting technology to work, letting it analyze resource utilization and respond instantaneously by provisioning extra resources in the virtualized environment, enhances productivity and throughput.