Revisiting the Anatomy of HTTP: Part I
By Arun Kejariwal and Mehdi Daoudi

One of the key driving factors behind the various web/mobile performance initiatives is the fact that end-users’ tolerance for latency has nose-dived. Several published studies have demonstrated that poor performance routinely impacts the bottom line, viz., the number of users, the number of transactions, etc. Example studies include this, this, and this. There are several sources of performance bottlenecks, including, but not limited to:

  • Large numbers of redirects
  • Increasing use of images and/or videos without supporting optimizations such as compression and form-factor-aware content delivery
  • Increasing use of JavaScript
  • Performance tax of using SSL (as discussed here and here)
  • Increasing use of third-party services which, in most cases, become the longest pole from a performance standpoint

Over five years ago, we wrote a blog post on the anatomy of HTTP. In the Internet era, five years is a long time. There has been a sea change across the board, and hence we thought it was time to revisit the subject for the benefit of the community as a whole. The figure below shows the key components of an HTTP request.

Besides the independent components, key composite metrics are also annotated in the figure and are defined below.

Time to First Byte (TTFB): The time it took from the request being issued to receiving the first byte of data from the primary URL for the test(s). This is calculated as DNS + Connect + Send + Wait. For tests where the primary URL has a redirect chain, TTFB is calculated as the sum of the TTFB for each domain in the redirect chain.

Response: The time it took from the request being issued to the primary host server responding with the last byte of the primary URL of the test(s). For tests where the primary URL has a redirect chain, Response is calculated as the sum of the response time for each domain in the redirect chain.

Server Response: The time it took from when DNS was resolved to the server responding with the last byte of the primary URL of the test(s). This shows the server’s response exclusive of DNS time. For tests where the primary URL has a redirect chain, Server Response is calculated as the sum of the response time minus the DNS time for each domain in the redirect chain.
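
These composite metrics can be approximated in the browser itself. Below is a minimal sketch using the W3C Navigation Timing API; the attribute names follow the W3C specification rather than Catchpoint's instrumentation, and the mapping to the metric names above is approximate (for example, Send and Wait are combined).

    // Approximating the composite metrics above with the Navigation Timing API.
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];

    const dns      = nav.domainLookupEnd - nav.domainLookupStart;
    const connect  = nav.connectEnd - nav.connectStart;     // includes TLS setup
    const ssl      = nav.secureConnectionStart > 0
      ? nav.connectEnd - nav.secureConnectionStart
      : 0;
    const sendWait = nav.responseStart - nav.requestStart;  // Send + Wait combined
    const ttfb     = nav.responseStart - nav.startTime;     // ≈ DNS + Connect + Send + Wait
    const response = nav.responseEnd - nav.startTime;       // through the last byte of the primary URL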

A plot from an example test illustrating the aforementioned metrics is shown below:

Anomalies in any of these metrics can be detected and viewed via Catchpoint’s portal in a very straightforward fashion. An example illustrating an anomaly in TWait is shown below:

Other composite metrics of interest include:

Render Start: The time it took the browser to start rendering the page.

Document Complete: Indicates that the browser has finished rendering the page. In Chrome it is equivalent to the browser onload event. In IE it occurs just before onload is fired and is triggered when the document readyState changes to “complete.”

Webpage Response: The time it took from the request being issued to receiving the last byte of the final element on the page.

  • For Web tests, the agent waits for up to two seconds of network inactivity after Document Complete before ending the test
  • Webpage Response is impacted by script verbs for Transaction tests
  • For Object monitor tests, this value is equivalent to Response

Client Time: The time spent executing JavaScript and CSS in the browser. Equal to Webpage Response minus Wire Time.

Wire Time: The time the agent took loading network requests. Equal to Webpage Response minus Client Time.

Content Load: The time it took to load the entire content of the webpage after the connection was established with the server for the primary URL of the test(s). This is the time elapsed between the end of Send and the moment the final element, or object, on the page is loaded. Content Load does not include the DNS, Connect, Send, and SSL time on the primary URL (or any redirects of the primary URL). For Object monitor tests, this value is equivalent to Load.
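
To make the relationships among these metrics concrete, here is a small worked example; all of the waterfall values below are hypothetical, chosen only for illustration.

    // Hypothetical waterfall values, in milliseconds (illustrative only).
    const webpageResponse = 3200; // request issued -> last byte of the final element
    const wireTime        = 2100; // time the agent spent loading network requests

    // Client Time = Webpage Response - Wire Time
    const clientTime = webpageResponse - wireTime; // 1100 ms of JavaScript/CSS execution

    // Content Load excludes the setup phases on the primary URL.
    const dns = 40, connect = 60, ssl = 80, send = 5;
    const contentLoad = webpageResponse - (dns + connect + ssl + send); // 3015 ms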

In the context of mobile performance, a key metric that is very often the target of optimization is Above-The-Fold (ATF) time. The importance of ATF stems from the fact that while the user is interpreting the first screenful of content, the rest of the page can be delivered progressively in the background. A threshold of one second is often used as the target for ATF. Practically, after subtracting the network latency, the performance budget is about 400 milliseconds for the following: the server must render the response, client-side application code must execute, and the browser must lay out and render the content. For recommendations on how to optimize mobile websites, the reader is referred to this and this.
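
As a rough illustration of where the one-second budget goes, consider the hypothetical breakdown below; the component latencies are assumptions for illustration, not measurements.

    // A hypothetical one-second ATF budget on a mobile connection (all values assumed).
    const budgetMs  = 1000; // common ATF target
    const dnsMs     = 200;  // assumed DNS lookup
    const tcpMs     = 200;  // assumed TCP handshake
    const requestMs = 200;  // assumed HTTP request/response round trip
    const remaining = budgetMs - (dnsMs + tcpMs + requestMs); // ≈ 400 ms
    // The remainder must cover server-side rendering, client-side JavaScript
    // execution, and browser layout and paint.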

A key difference between the figure above and the corresponding figure in the previous blog is the presence of the TSSL component. Unencrypted communication – via HTTP and other protocols – is susceptible to interception, manipulation, and impersonation, and can potentially reveal users’ credentials, history, identity, and other sensitive information. In the post-Snowden era, privacy has been in the limelight. Leading companies such as Google, Microsoft, Apple, and Yahoo have embraced HTTPS for most of their services and have also encrypted the traffic between their data centers. Delivering data over HTTPS has the following benefits:

  • HTTPS protects the integrity of the website by preventing intruders from tampering with exchanged data, e.g., rewriting content, injecting unwanted and malicious content, and so on.
  • HTTPS protects the privacy and security of the user by preventing intruders from listening in on the exchanged data. Each unencrypted request can potentially reveal sensitive information about the user, and when such data is aggregated across many sessions, can be used to de-anonymize identities and reveal other sensitive information. All browsing activity, as far as the user is concerned, should be considered private and sensitive.
  • HTTPS enables powerful features on the web, such as accessing users’ geolocation, taking pictures, recording video, and enabling offline app experiences; these features require explicit user opt-in, which, in turn, requires HTTPS.

When the SSL protocol was standardized by the IETF, it was renamed to Transport Layer Security (TLS). TLS was designed to operate on top of a reliable transport protocol such as TCP. The TLS protocol is designed to provide the following three essential services to all applications running above it:

  • Encryption: A mechanism to obfuscate what is sent from one host to another
  • Authentication: A mechanism to verify the validity of provided identification material
  • Integrity: A mechanism to detect message tampering and forgery

Technically, one is not required to use all three in every situation. For instance, one may decide to accept a certificate without validating its authenticity; having said that, one should be well aware of the security risks and implications of doing so. In practice, a secure web application will leverage all three services.

In order to establish a cryptographically secure data channel, both the sender and receiver of a connection must agree on which ciphersuites will be used and the keys used to encrypt the data. The TLS protocol specifies a well-defined handshake sequence (illustrated below) to perform this exchange.

TLS Handshake. Note that the figure assumes the same (optimistic) 28 millisecond one-way “light in fiber” delay between New York and London.

As part of the TLS handshake, the protocol allows both the sender and the receiver to authenticate their identities. When used in the browser, this authentication mechanism allows the client to verify that the server is who it claims to be (e.g., a payment website) and not someone simply pretending to be the destination by spoofing its name or IP address. Likewise, the server can also optionally verify the identity of the client — e.g., a company proxy server can authenticate all employees, each of whom could have their own unique certificate signed by the company. Finally, the TLS protocol also provides its own message framing mechanism and signs each message with a message authentication code (MAC). The MAC algorithm is a one-way cryptographic hash function (effectively a checksum), the keys to which are negotiated by both the sender and the receiver. Whenever a TLS record is sent, a MAC value is generated and appended for that message, and the receiver is then able to compute and verify the sent MAC value to ensure message integrity and authenticity.
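
The outcome of a handshake can also be inspected programmatically. Below is a minimal sketch using Node.js's built-in tls module; the host name is a placeholder.

    // Observing a TLS handshake and its negotiated parameters from Node.js.
    import * as tls from "node:tls";

    const start = Date.now();
    const socket = tls.connect(
      { host: "example.com", port: 443, servername: "example.com" },
      () => {
        // Fires once the handshake completes, i.e., once the ciphersuite
        // and keys have been agreed upon.
        console.log("TCP connect + TLS handshake:", Date.now() - start, "ms");
        console.log("protocol:", socket.getProtocol());  // e.g., "TLSv1.2"
        console.log("cipher:", socket.getCipher().name); // negotiated ciphersuite
        socket.end();
      }
    );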

From the figure above we note that TLS connections require two roundtrips for a “full handshake” and thus have an adverse impact on performance. The plot below illustrates the comparative Avg TSSL (the data was obtained via Catchpoint) for the major airlines:

However, in practice, optimized deployments can do much better and deliver a consistent 1-RTT TLS handshake. In particular:

  • False Start – a TLS protocol extension – can be used to allow the client and server to start transmitting encrypted application data when the handshake is only partially complete, i.e., once the ChangeCipherSpec and Finished messages are sent, but without waiting for the other side to do the same. This optimization reduces the handshake overhead for new TLS connections to one round trip.
  • If the client has previously communicated with the server, then an “abbreviated handshake” can be used, which requires one roundtrip and also allows the client and server to reduce the CPU overhead by reusing the previously negotiated parameters for the secure session.
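
A minimal sketch of the abbreviated handshake, using Node.js's tls module, is shown below; the second connection reuses the first connection's session, and isSessionReused() reports whether the server accepted the resumption (the host name is a placeholder).

    // TLS session resumption (abbreviated handshake) in Node.js.
    import * as tls from "node:tls";

    const host = "example.com"; // placeholder host
    const first = tls.connect({ host, port: 443, servername: host });
    // The 'session' event delivers a resumable session (works for TLS 1.2 and 1.3).
    first.once("session", (session) => {
      first.end();
      const second = tls.connect(
        { host, port: 443, servername: host, session },
        () => {
          // true when the server accepted the abbreviated handshake
          console.log("resumed:", second.isSessionReused());
          second.end();
        }
      );
    });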

The combination of both of the above optimizations allows us to deliver a consistent 1-RTT TLS handshake for new and returning visitors and facilitates computational savings for sessions that can be resumed based on previously negotiated session parameters. Other ways to minimize the performance impact of HTTPS include:

  • Use of HTTP Strict Transport Security (HSTS), which instructs web browsers to access web servers solely over HTTPS. This mitigates the performance impact by eliminating unnecessary HTTP-to-HTTPS redirects; that responsibility is shifted to the client, which will automatically rewrite all links to HTTPS (see the sketch after this list).
  • Early termination – terminating the TLS connection at a server geographically closer to the user, e.g., at a CDN edge node – helps minimize the latency due to the TLS handshake.
  • Use of compression, e.g., HPACK for HTTP/2 header compression and Brotli for content encoding.
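
As a minimal sketch of the HSTS mechanism from the first bullet above: the server sets a single response header. The max-age value below (one year) is a common choice rather than a mandated default, and browsers only honor the header when it is delivered over HTTPS.

    // Emitting an HSTS header from a Node.js server (sketch; in production the
    // response carrying this header must itself be served over HTTPS to be honored).
    import * as http from "node:http";

    http
      .createServer((req, res) => {
        // Tells the browser to use HTTPS for this origin for one year,
        // eliminating HTTP-to-HTTPS redirect round trips on repeat visits.
        res.setHeader(
          "Strict-Transport-Security",
          "max-age=31536000; includeSubDomains"
        );
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("ok");
      })
      .listen(8080);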

For further discussion on SSL/TLS, the reader is referred to the paper titled, “Anatomy and Performance of SSL Processing” by Zhao et al. and the paper titled, “Analysis and Comparison of Several Algorithms in SSL/TLS Handshake Protocol” by Qing and Yaping.
