Big Data Journal: Article

Software Quality Metrics for Your Continuous Delivery Pipeline | Part I

Even small changes need to be tracked and their impact on overall software quality must be measured

How often do you deploy new software? Once a month, once a week, or every hour? The more often you deploy, the smaller your changes will be. That's good! Why? Because smaller changes tend to be less risky: it's easier to keep track of what has actually changed. For developers, it's certainly easier to fix something they worked on three days ago than something they wrote last summer. An analogy from a recent AutoScout24 conference talk: think of your release as a container ship, with every one of your changes a container on that ship:

Your next software release en route to meet its iceberg

If all you know is that you have a problem in one of your containers, you'd have to unpack and check all of them. That doesn't make sense for a ship, and it doesn't make sense for a release either. But that's still what happens quite frequently when a deployment fails and all you get is "it didn't work." If, instead, you were shipping just a couple of containers, you could replace your giant, slow-maneuvering vessel with something faster and more agile - and if you're looking for a problem, you'd only have to inspect a handful of containers. In the shipping industry this would be a rather costly approach, but it is exactly what continuous delivery allows us to do: deploy more often, get faster feedback, and fix problems faster.

A great example is Amazon, who shared their success metrics at Velocity:

Some impressive stats from Amazon showing the success of rapid continuous delivery

However - even small changes can have severe impacts. Some examples:

  1. Heavy DOM Manipulations through JavaScript: Introduced by a "harmless" new JavaScript library for tracking link clicks
  2. Memory Leaks in Production: Introduced by a poorly tested remote logging framework downloaded from GitHub
  3. Performance Impact of Exceptions in Ops: Ops and Dev did not follow the same deployment steps (due to missing automation scripts), resulting in thousands of exceptions that maxed out CPU on all app servers
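The second example, a leaking logging framework, is exactly the kind of regression a per-transaction memory measurement catches early. Here is a minimal Python sketch using the standard tracemalloc module; the leaky logger itself is a made-up illustration, not any real framework:

```python
import tracemalloc

# Illustrative leaky logging shim: it keeps every record alive in a
# module-level list, like the poorly tested remote logging framework above.
_buffer = []

def leaky_log(message):
    _buffer.append(message * 100)  # grows without bound

def allocated_kb_per_batch(fn, calls=1000):
    """Measure net memory growth (KB) across a batch of calls."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for i in range(calls):
        fn(f"request {i}")
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (after - before) / 1024

growth = allocated_kb_per_batch(leaky_log)
# A non-leaky logger would show near-zero net growth here.
print(f"net growth after 1000 calls: {growth:.0f} KB")
```

Run in a test stage, a check like this flags unbounded growth per transaction long before the leak reaches production.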

Extending Your Delivery Pipeline
Even small changes need to be tracked, and their impact on overall software quality must be measured along the delivery pipeline, so that your quality gates can stop even the smallest change from causing a huge issue. The three examples above could have been avoided by automatically checking the following measures across the delivery pipeline and stopping the delivery when "architectural" regressions are detected:

  • The number of DOM manipulations
  • Memory usage or object churn rate per transaction
  • The number of exceptions, database queries, or log entries
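A quality gate on these measures can be sketched in a few lines. The following is a minimal, tool-agnostic Python sketch; the metric names, baseline values, and 20% tolerance are all illustrative assumptions, not from any specific product:

```python
# Hypothetical per-build metrics compared against a known-good baseline;
# the pipeline stops when an "architectural" regression appears.

BASELINE = {
    "dom_manipulations": 120,   # per page load
    "memory_per_txn_kb": 2048,  # per transaction
    "exceptions": 5,            # per test run
    "db_queries_per_txn": 10,
}

REGRESSION_TOLERANCE = 1.2  # fail if a metric grows more than 20%

def check_quality_gate(measured):
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    for metric, baseline in BASELINE.items():
        value = measured.get(metric, 0)
        limit = baseline * REGRESSION_TOLERANCE
        if value > limit:
            violations.append(f"{metric}: {value} exceeds limit {limit:.0f}")
    return violations

# Example: a "harmless" new JavaScript library tripled DOM manipulations.
build_metrics = {
    "dom_manipulations": 360,
    "memory_per_txn_kb": 2100,
    "exceptions": 4,
    "db_queries_per_txn": 10,
}
for violation in check_quality_gate(build_metrics):
    print("Quality gate FAILED:", violation)
```

The point is not the specific thresholds but the mechanism: the build fails on the metric regression, not on a functional test, so the "harmless" change never leaves the pipeline.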

In this series of blog posts I will introduce the metrics you should measure along your pipeline so that they act as an additional quality-gate mechanism and prevent problems like those listed above. It is important that:

  • Developers get these measurements in the commit stage
  • Automation Engineers measure them in the automated unit and integration tests
  • Performance Engineers add them to the load testing reports produced in staging
  • Operations verifies how the real application behaves after a new deployment in production

For each metric I introduce, I'll explain why it is important to monitor, which types of problems it can detect, and how Developers, Testers and Operations can track it.

More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team. In his role he influences the Compuware APM product strategy and works closely with customers in implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
