By Andreas Grabner
October 6, 2014 04:00 AM EDT
Web Service Monitoring 101: Identifying Bad Deployments
Have you ever deployed a change to production and thought "All went well - Systems are operating as expected!" but then you had to deal with users complaining that they keep running into errors?
When deployments fail you don't want your users to be the first to tell you about it: Sit down with the Business and Dev to define how and what to monitor
We recently moved some of our systems between two of our data centers, even moving some components to the public cloud. Everything was prepared well, system monitoring was set up, and everyone gave the thumbs-up to execute the move. Immediately afterward, our Operations dashboards continued to show green. Soon thereafter I received a complaint from a colleague who reported that he couldn't use one of the migrated services (our free dynaTrace AJAX Edition) anymore, as the authentication web service seemed to fail. The questions we asked ourselves were:
- Impact: Was this a problem related to his account only or did it impact more users?
- Root Cause: What is the root cause and how was this problem introduced?
- Alerting: Why don't our Ops monitoring dashboards show any failed web service calls?
It turned out that the problem was in fact:
- Caused by the deployment of an outdated configuration file
- Limited to employees whose accounts were handled by a different authentication back-end service
- Invisible on the Ops dashboards, because the SOAP framework we use always returns HTTP 200 and transports any success/failure information in the response body, which never shows up in a web server log file
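To make the last point concrete, here is a minimal sketch of what status-code monitoring misses. The endpoint and payload below are hypothetical, but the pattern matches the SOAP 1.1 convention the article describes: the transport says 200 OK while a Fault element in the body carries the real failure.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_call_failed(http_status, body):
    """Return True if the call failed, even when the HTTP status is 200.

    Many SOAP 1.1 frameworks answer 200 OK and put a <soap:Fault>
    element in the body; monitoring keyed on status codes alone
    never sees these failures.
    """
    if http_status >= 400:
        return True
    root = ET.fromstring(body)
    return root.find(".//{%s}Fault" % SOAP_ENV) is not None

# Hypothetical response: the web server logs this as a plain "200 OK".
body = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>Authentication back-end unavailable</faultstring>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>"""

print(soap_call_failed(200, body))  # True: a failure hidden behind HTTP 200
```

A log-file monitor sees only the status code and stays green; inspecting the response body is what surfaces the logical error.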
In this blog I give you a little more insight into how we triaged the problem and some best practices we derived from that incident in order to level up technical implementations and production monitoring. Only if you monitor all your system components and correlate the results with deployment tasks will you be able to deploy with more confidence and without disrupting your business.
Bad Monitoring: When Your End Users Become Your Alerting System
So - when I got a note from a colleague that he could no longer use dynaTrace AJAX Edition to analyze the web site performance of a particular web site, I launched my copy to verify this behavior. It failed with my credentials, which proved that it was not a local problem on my colleague's machine:
Business Problem: Our end users can't use our free product due to failing authentication service
Asking our Ops Team that manages and monitors these web services resulted in the following response:
"We do not see any errors on the Web Server nor do we have any reported availability problems on our authentication service. It's all green on our infrastructure dashboards as can be seen on the following screenshot:"
Infrastructure is all green: No HTTP-based errors or SLA problems based on IIS log or on any of the resources on the host
Web Server Log Monitoring Is Not Enough
As mentioned in the initial paragraphs, it turned out that our SOAP framework always returns HTTP 200 with the actual error in the response body. This is not an uncommon "best (or worst) practice," as you can see, for instance, in the following discussion on GitHub.
The problem with that approach, though, is that "traditional" operations monitoring based on web server log files will not detect any of these logical/business problems. As you don't want to wait until your users start complaining, it's time to level up your monitoring approach. How can this be done? Those developing and those monitoring the system need to sit down and figure out a way to monitor the usage of these services, and they need to talk with the business to figure out which level of detail to report and alert on.
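One concrete outcome of such a conversation is to count logical failures per service operation and alert on the failure rate, rather than on HTTP status codes alone. The sketch below is a hypothetical, in-memory illustration; a real setup would push these counters to the monitoring system (as custom metrics) instead of keeping them in a Python object.

```python
from collections import Counter

class ServiceCallMonitor:
    """Track logical (response-body) failures per operation.

    Hypothetical sketch: instead of relying on web server logs, the
    application records the business outcome of every service call,
    so dashboards can alert on failure rates per operation.
    """

    def __init__(self):
        self.calls = Counter()     # total calls per operation
        self.failures = Counter()  # logical failures per operation

    def record(self, operation, failed):
        """Record one call; 'failed' is the logical outcome, not the HTTP status."""
        self.calls[operation] += 1
        if failed:
            self.failures[operation] += 1

    def failure_rate(self, operation):
        """Fraction of calls that failed logically; 0.0 if never called."""
        total = self.calls[operation]
        return self.failures[operation] / total if total else 0.0

# Usage: authentication succeeds for one account, fails for another,
# even though both calls came back as HTTP 200.
monitor = ServiceCallMonitor()
monitor.record("authenticate", failed=False)
monitor.record("authenticate", failed=True)
print(monitor.failure_rate("authenticate"))  # 0.5
```

With per-operation failure rates like this feeding the Ops dashboards, the outdated-configuration deployment described above would have tripped an alert long before a colleague had to report it.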
How can you find out whether your current monitoring approach works? Start by looking more closely at problems that your users report but that you never get automatic alerts on. Then talk with your engineers and see whether they use frameworks like the one mentioned above.
For further insight, and for lessons learned, click here for the full article.