
Testing Basics Might Have Averted Obamacare Health Site Fiasco

Attention to testing best practices could have averted a hellish user experience and bad PR

It made headlines for all the wrong reasons when it launched on October 1, but things could have been so different for the HealthCare.gov website if only it had been tested properly before release. Users trying to enroll encountered all sorts of glitches, including very slow page updates, "page not found" errors and frequent crashes.

Early server outages were blamed on an unexpectedly high volume of traffic as nearly 5 million Americans tried to access the website on day one, but it soon emerged that serious flaws existed in the software, and the security was not properly assessed or signed off.

According to CBS, the security testing was never completed. Fox uncovered a testing bulletin from the day before the launch revealing that the site could handle only 1,100 users "before response time gets too high." The Washington Examiner revealed, via an anonymous source, that full testing was delayed until just days before the launch: instead of the four to six months of testing that should have been conducted, the site was tested for only four to six days.
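A basic pre-launch load test would have surfaced exactly this kind of capacity ceiling. The sketch below is a minimal, self-contained illustration of the idea: a hypothetical "server" with a fixed number of concurrent slots stands in for the real back end, and response times are measured as load climbs past capacity. All names and numbers here are illustrative, not taken from the actual HealthCare.gov stack.

```python
import concurrent.futures
import threading
import time

# Hypothetical stand-in for the real site: a handler that can serve only a
# limited number of requests at once; extra requests queue for a free slot.
CAPACITY = 10           # concurrent requests the "server" can handle
SERVICE_TIME = 0.01     # seconds of simulated work per request
_slots = threading.Semaphore(CAPACITY)

def handle_request():
    """Simulated request: wait for a free slot, do fixed work, return latency."""
    start = time.monotonic()
    with _slots:
        time.sleep(SERVICE_TIME)
    return time.monotonic() - start

def load_test(concurrent_users):
    """Fire `concurrent_users` simultaneous requests and report the worst latency."""
    with concurrent.futures.ThreadPoolExecutor(concurrent_users) as pool:
        times = list(pool.map(lambda _: handle_request(), range(concurrent_users)))
    return max(times)

# Worst-case response time grows sharply once load exceeds capacity --
# the limit a pre-launch load test is meant to expose.
light = load_test(CAPACITY)         # within capacity
heavy = load_test(CAPACITY * 10)    # well over capacity
print(f"worst latency at light load: {light:.3f}s, at heavy load: {heavy:.3f}s")
```

Against a real system the same shape of test would be run with a proper load-testing tool at realistic traffic volumes, well before launch day.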

Amid the apologies, the resignations, and the frantic efforts to fix it up by the end of November, there are serious and important lessons to be learned. A proper testing plan with a realistic schedule would have prevented this catastrophe.

Start with an Estimate
It's incredibly rare for any software to be released with zero defects, but major functional bugs and inadequate security are certainly avoidable if you plan correctly. That starts with a realistic estimate of the scope of the testing required. The QA department must be consulted and asked to use its experience to provide a picture of how much testing is needed.

That plan will be based on documentation outlining the requirements of the software and discussion with the developers, as well as the wealth of experience that testing professionals possess. If requirements change significantly, or new requests are introduced, then the plan must be altered to cater for that. This is one major area where things obviously went awry. According to the Washington Examiner's source there were "ever-changing, conflicting and exceedingly late project directions. The actual system requirements for Oct. 1 were changing up until the week before."

This is a clear recipe for disaster.

Agile Testing
Modern software development is typically based on Agile methodology where requirements are built into the system quickly and feedback informs the project going forward. This approach does not mesh with traditional testing where testers would work out a comprehensive test plan based on detailed documentation, and then carry out that testing in a predefined block at the end of the project.

To adapt testing for modern software development it pays to get testers involved earlier in the process. They need to understand the system and really identify with the end user. It's much more cost effective to fix flaws and bugs sooner rather than later.

There's a logistical consideration as well. Each new build means a full regression test, bug fix verification, and a healthy dose of exploratory testing to make sure the new features are working as intended. It's important for the test team to scale up as the amount of work grows, and as much of the regression testing as possible should be automated to reduce the workload.
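The regression automation described above can be as simple as a table of known-good expected results replayed against every new build. The sketch below assumes a hypothetical `monthly_premium` function standing in for real enrollment business logic; the function, its cases, and the expected values are all illustrative.

```python
# Hypothetical business logic standing in for part of an enrollment back end.
def monthly_premium(age, smoker):
    """Toy premium calculation used only to demonstrate the technique."""
    if age < 0:
        raise ValueError("age must be non-negative")
    base = 200 + age * 3
    return base * 1.5 if smoker else base

# Regression cases: expected results captured from a known-good build.
# These run automatically on every new build, so fixes and new features
# cannot silently break behavior that already worked.
REGRESSION_CASES = [
    ((30, False), 290),
    ((30, True), 435.0),
    ((64, False), 392),
]

def run_regression():
    """Return a list of (args, expected, actual) for every failing case."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = monthly_premium(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print("regression failures:", run_regression())
```

In practice the same pattern is usually expressed with a test framework such as pytest or JUnit and wired into the build pipeline, so a failing case blocks the build rather than being discovered by hand.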

Exploratory Testing
In fast-paced development it is vital to get experienced testers and have them perform some level of exploratory testing. This combines their knowledge of how the system should work with educated guesses about where it might fail. It's also very useful when documentation is lacking, because testers can effectively design and execute tests at the same time.

Targeted exploratory testing is the perfect complement to scripted testing. It requires some creative thinking and some freedom for the tester, but it can be a great way of emulating an end user and ensuring that specific features and functions actually deliver what they're supposed to. Properly recorded by good cloud-based testing tools, the data can be used to provide clarity for developers trying to fix problems, and it can serve as the basis of scripted testing or even automated tests in the future.

Test Management
A project such as this, where disparate teams have to work together toward a common goal, can be an integration nightmare. Test management can be a real challenge, so the right tool is invaluable. The full lifecycle of every defect or requirement should be recorded to produce a clear chain from the original feature request, through the test case, to the defect, and on to repeated test cycles. It has to be clear who is responsible for each action every step of the way, so the blame game can be avoided entirely.
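The traceability chain just described, from feature request through test case to defect and repeated test cycles, with a responsible owner at every step, is essentially a linked set of records. The sketch below models it with hypothetical record types; real test-management tools maintain the same links, just with far richer detail.

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    defect_id: str
    owner: str            # who is responsible for the fix
    status: str = "open"

@dataclass
class TestCase:
    case_id: str
    requirement_id: str   # link back to the original feature request
    owner: str            # who is responsible for executing the test
    defects: list = field(default_factory=list)
    cycles: list = field(default_factory=list)  # (cycle name, result) pairs

def trace(case):
    """Walk the chain: requirement -> test case -> defects -> test cycles."""
    return {
        "requirement": case.requirement_id,
        "test_case": case.case_id,
        "defects": [(d.defect_id, d.owner, d.status) for d in case.defects],
        "cycles": case.cycles,
    }

# Illustrative record: a requirement tested by one case, which found a
# defect that failed in cycle 1 and passed after the fix in cycle 2.
tc = TestCase("TC-101", "REQ-7", owner="qa-alice")
tc.defects.append(Defect("DEF-55", owner="dev-bob", status="fixed"))
tc.cycles += [("cycle-1", "fail"), ("cycle-2", "pass")]
print(trace(tc))
```

Because every record carries an owner, the chain answers "who was responsible at this step" directly from the data, which is exactly what removes the blame game.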

The ultimate aim is traceability, usability, and transparency.

If this data is gathered then it becomes easier to apply root cause analysis at a later date and discover where things went wrong. Remember that the earlier you can catch and fix the defect, the cheaper and easier it is to do. Identifying the root causes of the problems with the HealthCare.gov website requires an objective analysis of the original requirements, the documentation, the code implementation and integration, the test planning, and the test cycles. Understanding what went wrong through this process could ensure that the same mistakes are not made again in the future.

Knowing When to Pull the Trigger
Kathleen Sebelius, the Secretary of Health and Human Services, apologized for her part in the botched website launch, but the real problem, and her cardinal sin, was telling Obama that the website was ready to launch in the first place.

QA departments are not the gatekeepers for projects; business decisions will always trump everything else, and the pressure to deliver ensures that every project launches with some defects. But you ignore them at your peril. If the testers had been consulted about the state of the website and the back end before launch, you can bet they would have pointed out that it wasn't ready for prime time. A one- or two-month delay would undoubtedly have been greeted with some alarm and criticism, but it would have caused far less damaging PR than releasing an unfinished and potentially insecure product.

More Stories By Vu Lam

Vu Lam is founder and CEO of QASymphony, developers of defect capture tools that track user interactions with applications. He was previously with First Consulting Group and has been an early pioneer in Vietnam's offshore IT services industry since 1995. He holds an MS degree in electrical engineering from Purdue University. You may reach him at [email protected]
