Better Services Mean Better Performance

Insurance leader AIG drives business transformation and IT service performance through center of excellence model

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how global insurance leader American International Group (AIG) has leveraged a performance center of excellence (COE) to help drive business transformation.

We learn in our discussion how AIG's Global Performance Architecture Group improved performance of their services to deliver better experiences and payoffs for businesses and end-users alike.

Here to explore these and other enterprise IT issues, we're joined by our co-host for this sponsored podcast, Chief Software Evangelist at HP, Paul Muller.

And we also welcome our special guest, Abe Naguib, Senior Director of AIG’s Global Performance Architecture Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Many organizations are now focusing more on the user experience and the business benefits and less on pure technology -- and for many, it's a challenge. From a very high level, how do you perceive the best way to go about a cultural shift, or an organizational shift, from a technology focus more toward this end-user experience focus?

Naguib: There are several paradigms involved, from the COO's and CFO's push on innovation and efficiency. A lot of the tooling we use, a lot of the products we use, help us diversify and resolve some of the challenges we have. That keeps change moving.

The CIO has to keep his eye forward to periodically change tracks, ensuring that the customers are getting the best value for their money. That's a tall order, and he has to predict benefit, gauge value, maintain integrity, socialize, and evolve the strategy of business ideas on how technology should run.

We have to manage quite a few challenges from the demands of operating a global franchise. Our COE looks at various levels of optimization; one key target is customer service and the factors that drive the value chain.

That means aligning DevOps to business, reducing data-center sprawl, validating and making sense of vendors, products, and services, improving the return on investment (ROI) and total cost of ownership (TCO) of emerging technologies, increasing economy of scale, and improving services and hybrid cloud systems as we isolate and identify the cascading impacts on systems. These efforts help derive value across the chain and eventually improve customer value.

Gardner: Paul Muller, does this jibe with what you're seeing in the field? Do you see an emphasis that's more on this sort of process level when it comes to IT, with, of course, more input from folks like the COO and the chief financial officer?

Level of initiatives

Muller: As I was listening to Abe's description, I was thinking that you really can tell the culture of an organization by the level of initiatives and thinking that it has. In fact, you can't change one without changing the other. What Abe just described is a very high level of cultural maturity.

We do see it, but we see it in maybe 10 to 15 percent of organizations that have gone through the early stages of understanding the performance and quality of applications, optimizing them for cost and performance, and then moving through to the next stage, reevaluating the entire chain and taking a broader perspective focused on user experience. So it's not unique, but it's certainly found among the more mature organizations in terms of their thinking.

Gardner: Tell us about AIG, its breadth, and particularly the business requirements that your Global Performance Architecture Group is tasked with meeting.

Naguib: AIG is a leading international insurance organization operating across 130 countries. AIG companies serve commercial, institutional, and individual customers through one of the world's most extensive property/casualty networks and are leading providers of life insurance and retirement services in the US.

Among the brand pillars that we focused on are integrity, innovation, and market agility across the variety of products that we offer, as well as customer service.

With AIG's mantra of "better, faster, cheaper," my organization's people, strategy, and comprehensive tools help us bridge the gaps that a global firm faces today. There are many technology objectives across different organizations that we align, and we utilize various HP solutions to drive our objectives, which is getting the various IT delivery pistons firing in the same direction and at the right time.

These include performance, application lifecycle management (ALM), and business service management (BSM), as well as project and portfolio management (PPM). Over time, our Global Performance organization has evolved, and our senior management realized our strategic benefit and our capability to reduce cost and risk and to mitigate production issues.

Our role eventually moved out of quality assurance's (QA's) functional testing area to focus on application performance, architecture design patterns, emerging technologies, infrastructure and consolidation strategies, and risk mitigation, as well as increasing ROI and economy of scale. With the right people, process, and tools, our organization enabled IT transparency and application tuning, reduced infrastructure consumption, and accelerated resolution of system performance issues in dev and production.

The key is that bringing together our business-critical and strategic drivers across IT's various segments fosters alignment, agility, and eventually unity. Now, our leaders seek our guidance to help tune IT's financial performance and unlock optimal business value.

Culture of IT

Gardner: Is that a pattern you're seeing, that the people in QA are, in a sense, breaking out of just an application-performance level and moving more into what we could call an IT-performance level?

Naguib: In the last six or seven years, there's been less focus on just basic performance optimization. The focus is now on the business strategy's impact on infrastructure CAPEX and OPEX. Correlating business use cases to their impact on infrastructure is the golden grail.

Once you start communicating to CIOs the impact of a system and the cost of hosting, licensing, headcount, service sprawl, branding, and interdependent services, you're doing more to align DevOps with the business.

Muller: I just had a conversation not three weeks ago with a financial institution in another part of the world. I asked who is responsible for your end-to-end business process -- in this case I think it was mortgage origination -- and the entire room looked at each other, laughed, and said "We don't know."

So you've really got this massive gap, not just in IT-process maturity but also in business-process maturity, and it's very challenging, in my experience, to have one without the other.

Gardner: I think we have to recognize, too, that most businesses now realize that software is such an integral part of their business success. Being adept at software, whether it's writing it, customizing it, implementing and integrating it, or just managing its overall lifecycle, has become the lifeblood of business, not just an element of IT. Do you sense, Abe, that software is given more clout in your organization?

Naguib: Absolutely, Dana. I truly believe that. I've been kind of an internal evangelist on this, but I always say that software drives the hardware. Whether I communicate with the enterprise architects, the dev teams, or the infrastructure teams, software frankly does drive the hardware.

That's really the key point here. If you start managing your root cost and performance from a software perspective and then work your way out, you’ve got the key to unlocking everything from efficiencies to optimizing your ROI and to addressing TCO over time. It's all business driven. Know your use cases. Know how it impacts your software, which impacts your infrastructure.
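As a rough illustration of working "from the software out," here is a minimal Python sketch that translates business use-case volumes into an infrastructure estimate. The transaction names, rates, per-transaction CPU costs, and host sizes are all hypothetical assumptions, not AIG figures.

```python
# Hypothetical sketch: translate business use-case volumes into an
# infrastructure estimate, working "from the software out".

# Assumed workload model: peak transactions per second and measured
# CPU-seconds consumed per transaction (illustrative numbers only).
use_cases = {
    "get_quote":   {"peak_tps": 120, "cpu_sec_per_txn": 0.040},
    "bind_policy": {"peak_tps": 15,  "cpu_sec_per_txn": 0.250},
    "file_claim":  {"peak_tps": 8,   "cpu_sec_per_txn": 0.400},
}

CORES_PER_HOST = 16        # assumed host size
TARGET_UTILIZATION = 0.60  # headroom so latency stays flat at peak

# CPU-seconds demanded per wall-clock second equals cores of steady demand.
cores_needed = sum(u["peak_tps"] * u["cpu_sec_per_txn"] for u in use_cases.values())
hosts_needed = -(-cores_needed // (CORES_PER_HOST * TARGET_UTILIZATION))  # ceiling

print(f"Steady CPU demand: {cores_needed:.1f} cores")
print(f"Hosts needed at {TARGET_UTILIZATION:.0%} target utilization: {hosts_needed:.0f}")
```

The point of the sketch is the direction of the arithmetic: use cases drive software consumption, which drives the hardware footprint and, in turn, cost.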

Converged infrastructure

Gardner: Just being productive for its own sake isn’t good enough in this economy. We have to show real benefits, and you have to measure those benefits. Maybe you have some way to translate how this actually does benefit your customers. Any metrics of success you can share with us, Abe?

Naguib: Yes, during our initial requirements-gathering phase with our business leaders, we start defining the appropriate test-modeling strategy, including volumetrics, and understanding the deployment pattern, subscriber demographics, and user roles. We start aligning DevOps organizations with business targets, which improves delivery expectations, ROI, TCO, and capacity models.
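To make that concrete, here is a small Python sketch of what such a test model might look like when captured as plain data. The roles, volumetrics, and deployment pattern are illustrative assumptions, not details from the AIG engagement.

```python
# Hypothetical sketch of a test model captured during requirements gathering:
# volumetrics, user roles, and a deployment pattern, expressed as plain data
# so dev, QA, and capacity teams all read the same assumptions.
from dataclasses import dataclass

@dataclass
class UserRole:
    name: str
    concurrent_users: int   # peak concurrency from business volumetrics
    think_time_sec: float   # average pause between user actions
    transaction_mix: dict   # transaction name -> share of this role's traffic

test_model = {
    "deployment": "two data centers, active/active",  # assumed pattern
    "roles": [
        UserRole("agent", concurrent_users=3000, think_time_sec=12.0,
                 transaction_mix={"get_quote": 0.7, "bind_policy": 0.3}),
        UserRole("policyholder", concurrent_users=8000, think_time_sec=25.0,
                 transaction_mix={"view_policy": 0.8, "file_claim": 0.2}),
    ],
}

# Rough arrival rate implied by the model (a Little's Law-style estimate).
for role in test_model["roles"]:
    tps = role.concurrent_users / role.think_time_sec
    print(f"{role.name}: ~{tps:.0f} transactions/sec at peak")
```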

Then, before production, our Application Performance Engineering (APE) team identifies weak spots to provide the production team with reusable scripts that set thresholds on exact hotspots in a system, so that eventually, in production, they can take appropriate proactive measures. Now, this is value add.
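One simple way to hand those hotspots to operations is to derive alert thresholds from the latencies measured under load. The sketch below is a generic Python illustration of that handoff, with hypothetical transaction names, samples, and margins; it is not the APE team's actual tooling.

```python
# Hypothetical sketch: turn hotspot latencies measured in performance testing
# into production alert thresholds that operations can reuse.
import statistics

def build_thresholds(samples_ms, warn_margin=1.2, crit_margin=1.5):
    """Derive warning/critical thresholds from tested p95 latency per hotspot."""
    thresholds = {}
    for txn, samples in samples_ms.items():
        p95 = statistics.quantiles(samples, n=20)[18]  # 95th percentile
        thresholds[txn] = {
            "warn_ms": round(p95 * warn_margin),
            "crit_ms": round(p95 * crit_margin),
        }
    return thresholds

# Illustrative test results for two hotspots identified during tuning.
tested = {
    "policy_lookup": [180, 210, 190, 250, 205, 230, 215, 240, 260, 200,
                      195, 220, 235, 245, 225, 210, 205, 215, 255, 230],
    "rate_engine":   [600, 640, 610, 700, 660, 690, 720, 650, 630, 710,
                      605, 615, 670, 680, 640, 655, 665, 695, 675, 625],
}
print(build_thresholds(tested))
```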

Muller: As we're seeing across the planet at the moment, there's a recognition that delivering great software and information is really a function of getting Layers 1 through 7 of the technology stack working, but it's also about getting Layer 8 working. Layer 8, in this case, is the people. Unfortunately, being technologists, we often forget about the people in this process.

What Abe just described is a great representation of the importance of getting not just one functional part of IT, in this case quality and performance, working well, but also of recognizing that the software will one day be handed to operational staff to monitor and manage in a production setting.

The big transformation taking place right now is that our organization is connecting different silos of IT delivery, in particular development, quality, and operations, to help them accelerate the release of quality applications, and to automate things like threshold setting, and optimize monitoring of metrics ahead of time. Rather than discovering that an application might fail to perform in a production setting, where you've got users screaming at you, you get all of that work done ahead of time.

Sharing and trust

You create a culture of sharing and trust between development, quality, and operations that frankly doesn't exist in a lot of organizations, where the relationship between development and operations is pretty strained.

Gardner: Abe, how do you measure this? We recognized the importance of the metrics, but is there a new coin of the realm in terms of measurement? How do you put this into a standardized format that you’re going to take to your CFO and your COO and say here’s what's really happening?

Naguib: That's a good question. Tying into what Paul was saying, nobody cared about whether we improved performance by three seconds or two seconds. You care at the front end, when you hear users grumbling. The bottom line is how the application behaves, translating that into business impact as well as IT impact.

Business impact is the dollar value of key use cases and transactions that don't scale. Again, software drives the hardware. If an application consumes more hardware, the hardware itself is cheap nowadays, but the licenses aren't. You have database and middleware products running in that environment, whether it's on-premise or in the cloud.

The point is that the impact should be measured, and that's how we started communicating results through our organization. That's when we started seeing C-level officers tuning in and realizing the impact of performance on the bottom line, and even on the top line.
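To illustrate why "hardware is cheap but licenses aren't" translates directly into dollars, here is a back-of-envelope Python sketch of a tuning result expressed as cost. The per-core prices and core counts are illustrative assumptions, not AIG or vendor figures.

```python
# Hypothetical sketch: translate a tuning result into dollars, reflecting the
# point that hardware is cheap but per-core database/middleware licenses are not.
# All prices and core counts are illustrative assumptions.

CORE_LICENSE_COST = {"database": 9_000, "middleware": 4_500}  # $/core/year, assumed
HARDWARE_COST_PER_CORE = 600                                  # $/core/year, assumed

def annual_cost(cores):
    """Yearly licensing plus hardware cost for a given core footprint."""
    licenses = cores * sum(CORE_LICENSE_COST.values())
    hardware = cores * HARDWARE_COST_PER_CORE
    return licenses + hardware

before_cores, after_cores = 96, 64  # cores consumed before/after tuning (assumed)
savings = annual_cost(before_cores) - annual_cost(after_cores)
print(f"Annual cost before tuning: ${annual_cost(before_cores):,}")
print(f"Annual cost after tuning:  ${annual_cost(after_cores):,}")
print(f"Annual savings:            ${savings:,}")
```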

Our role is to provide more insight earlier and quicker to the right people at the right time.

Leveraging HP’s partnership and solutions helped us to address technologies, whether Web 2.0, client-server, legacy systems, Web, cloud-based, or hybrid models. We were able to leverage consistent dashboards across different IT solutions internally, then target weak spots and help drive optimization, whether on premise or cloud.

Muller: In the enterprise today, it's all about getting your ideas out of your head and making them a reality. As Abe just described, most of the best ideas that make their way into business processes ultimately get turned into software. So success is really all about having the best applications and information possible.

Understand maturity

The challenge is understanding how the technology, the business process, and the benefits come together, and then orchestrating the delivery of that benefit to your organization. It's not something that can be done without a deliberate focus on process. Again, the challenge is always understanding your organization's maturity, not just from an IT standpoint but, importantly, from a broader standpoint.

Naguib: What's the common driver for all? Money talks. Translating things into a dollar value started to bring groups together to understand what we can do better to improve our process.

What we're seeing more is that it's not just internal dev and ops that we're aligning with, or even our business service-level expectations. It's also partnerships with key vendors that have opened up the roadmap to align our technologies, requirements, and challenges with those solutions.

The gains we make are simple. They can be boiled down into three key benefits: savings, performance, and business agility. Leveraging HP's ALM solutions helps us drive IT and business transformation and unlock resources and efficiencies. That helps streamline delivery and increase the reliability of our mission-critical systems.

My favorite has always been HP's LoadRunner Performance Center. It's basically our Swiss Army knife for supporting diverse platform technologies and aligning business use cases to their impact on IT and infrastructure via HP SiteScope.

We're able to deep dive into the diagnostics, if needed. And the best part is, after we've dealt with tuning, we can help activate post-production monitoring using the same script, understanding where the weak spots are.

So the tools are there. The best part is that they're integrated and actually work together very well.
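The payoff of reusing the same scripts after go-live is easiest to see as a monitoring check against the handed-off thresholds. The Python sketch below is a generic illustration of that idea only; it is not the LoadRunner or SiteScope API, and the transactions and limits are hypothetical.

```python
# Minimal sketch (not the LoadRunner or SiteScope API): reuse the thresholds
# produced during performance testing to check live metrics after go-live.

def check_metrics(live_ms, thresholds):
    """Compare observed latencies against handed-off warn/critical thresholds."""
    alerts = []
    for txn, observed in live_ms.items():
        limits = thresholds.get(txn)
        if limits is None:
            continue
        if observed >= limits["crit_ms"]:
            alerts.append((txn, "CRITICAL", observed))
        elif observed >= limits["warn_ms"]:
            alerts.append((txn, "WARNING", observed))
    return alerts

# Thresholds as they might arrive from the performance-testing handoff.
thresholds = {"policy_lookup": {"warn_ms": 300, "crit_ms": 380},
              "rate_engine":   {"warn_ms": 850, "crit_ms": 1060}}
live = {"policy_lookup": 310, "rate_engine": 640}
for txn, level, value in check_metrics(live, thresholds):
    print(f"{level}: {txn} at {value} ms")
```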

Gardner: It really sounds like you've grabbed onto this system-of-record concept for IT, almost enterprise resource planning (ERP) for IT. Is that fair?

Naguib: That's a good way to put it.

Muller: One of the questions I get a lot from organizations is how to measure and reflect the benefit. What hard data have you managed to get?

Three-month study

Naguib: IDC came in and did an extensive three-month study, and what they found was interesting. We've realized savings of more than $11 million annually for the past five years by increasing our economy of scale. Better scale on a system allows more applications on the same host.

It's an efficiency in both hardware and software. They also found that our use of HP solutions increased staff productivity by over $300,000 a year. Instead of fighting fires, we're now focusing on innovation and improving business reliability by over $600,000 a year.

So all of that together adds up to a five-year ROI of about 577 percent. I was very excited about that study. They also showed that we reduced mean time to resolution by over 70 percent through production debugging, root-cause analysis, and resolution efforts.
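As a back-of-envelope check on how those cited figures fit together, here is a short Python calculation. It assumes the common definition ROI = (total benefit - investment) / investment; the implied investment it prints is a derived illustration, not a number from the IDC study.

```python
# Back-of-envelope check of the cited figures, assuming the common definition
# ROI = (total benefit - investment) / investment. The implied investment is a
# derived illustration, not a number from the IDC study.

annual_benefits = {
    "infrastructure_savings": 11_000_000,  # cited: >$11M/year from economy of scale
    "staff_productivity":        300_000,  # cited: >$300K/year
    "business_reliability":      600_000,  # cited: >$600K/year
}
years = 5
roi = 5.77  # cited: ~577 percent five-year ROI

total_benefit = sum(annual_benefits.values()) * years
implied_investment = total_benefit / (roi + 1)

print(f"Total five-year benefit: ${total_benefit:,.0f}")
print(f"Implied investment:      ${implied_investment:,.0f}")
```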

What we found, and technologists would agree with me, is that today, with hardware being cheaper than software, there is a hidden cost associated with hosting an application. The bottom line is that if we don't test and tune our applications holistically, across the architecture, code, infrastructure, and shared services, performance issues can quickly degrade quality of service, uptime, and eventually IT value.

I have a saying, which is that quality costs money but bad quality costs more.

Gardner: Abe, any recommendations that you might have for other organizations that are thinking of moving in this direction and that want to get more mature, as Paul would say. What are some good things to keep in mind as you start down this path?

Naguib: Besides "software drives the hardware" -- and I can't stress that enough -- the key is to understand business impact and translate whatever you're testing into the business model.

What happens in scenarios such as outages? What happens when things are delayed? What is the impact on business operability, productivity, liability, and customer branding? There are so many details that stem from performance. We used to deal with the "Google factor" of two-second response times, but now we're targeting more like millisecond responses, because there are so many interdependencies between our systems and services.
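One way to see why interdependencies push each individual service toward millisecond budgets is to add up a call chain against an end-to-end target. The Python sketch below uses entirely hypothetical service names and latencies.

```python
# Hypothetical sketch: why interdependencies push each service toward
# millisecond budgets. Service names and latencies are illustrative only.

END_TO_END_TARGET_MS = 2000  # the old "Google factor" two-second target

# A single user transaction fanning out across dependent services (assumed chain).
call_chain_ms = {
    "api_gateway": 15,
    "policy_service": 120,
    "rating_engine": 450,
    "customer_master": 90,
    "document_service": 200,
}

total = sum(call_chain_ms.values())
print(f"Chained latency: {total} ms of a {END_TO_END_TARGET_MS} ms budget")
for service, ms in call_chain_ms.items():
    print(f"  {service}: {ms} ms ({ms / END_TO_END_TARGET_MS:.1%} of budget)")
```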

Another fact is that a lot of products come through our doors on a daily basis. Modern technologies arrive with a lot of promises and a lot of commitments.

Identify what works

So it's about being able to separate the wheat from the chaff, identify what works and how the interdependencies work, and then partner with the vendors of those solutions and services. Having tools that add transparency into their products and align with our environment helps bring things together. Treating IT like a business by translating impact into dollar value helps everyone get aligned and responsive.

Muller: It might be a little controversial here, but the first step for progress on all of this is to look in the mirror and understand your organization and its level of maturity. You really need to assess that very self-critically before you start. Otherwise, you're going to burn a lot of capital, a lot of time, and a lot of credibility trying to move an organization from state A to state B. If you don't understand the maturity of your present state before you start working on the desired state, you can waste a lot of time and money. It's best to look in the mirror first.

The second step is to make sure that, before you even begin that process, you create that alignment and define the desired state in the context of the business. Make sure that your maturity aligns with the business's maturity and its goals. Abe just described the ability to measure the business impact of IT services in terms of revenue. Many companies can't even do something as fundamental as that. It can be really hard to drive alignment unless you've got business-IT alignment ahead of time.

I have said this so many times. The technology is a manageable problem. Layers 1 through 7, including management software to a certain degree, are solved problems most of the time. Solving the problem of Layer 8 is tough. You can reboot the server, but you can't reboot a person.

I always recommend bringing along some sort of organizational change management function. In our case, we actually have a number of trained organizational psychologists working for us who understand what it takes to get several hundred, sometimes several thousand, people to change the way they behave, and that's really important. You've got to bring the people along with it.

Gardner: I'd like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue with Paul Muller through the Discover Performance Group on LinkedIn, and also to follow Raf on his popular blog, Following the White Rabbit.

You can also gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance.

And you can always access this and other episodes in our HP Discover Performance Podcast Series at hp.com and on iTunes under BriefingsDirect.
