Web Performance is a Journey, Not a Destination

Mehdi Daoudi


Clear Skies Ahead: Explaining and Contextualizing Apdex Values

Partly cloudy with scattered showers.

Apdex is like a basic weather report; it provides a general overview of current conditions, but it doesn't tell you how many people are standing out in the rain. Typically, Apdex is used as an index that represents response time performance against user expectations. For web pages, this makes it a common barometer of user satisfaction with how quickly a page loads.

Like the weather report, it’s popular to criticize Apdex for being too imprecise, or not representing a complete view of the experience population. Rather than take that well-trodden approach, let’s instead look at how Apdex can be used productively without sacrificing its simplicity.

The basic Apdex calculation is straightforward. Threshold values are used to separate response time samples into three zones: Satisfied, Tolerating, and Frustrated. The index is then calculated using a basic formula.

Apdex = (Satisfied_count + Tolerating_count / 2) / Total_samples

This is best explained by example. Consider the sample web response times shown in the figure below. Users who experience response times less than the Tolerating threshold (T) are considered to be Satisfied, and their samples are counted in the Satisfied_count. Samples falling between T and the Frustrated threshold (F) are in the Tolerating_count. These are given half the weight of Satisfied samples (they’re divided by two). By default, the Frustrated threshold is set at F = 4T. Users experiencing response times greater than F are considered Frustrated, and aren’t counted in the Apdex numerator. When divided by the total number of samples, the result is a normalized index from zero to one.

[Figure: distribution of response time samples divided into the Satisfied, Tolerating, and Frustrated zones]
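
The calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the sample times and the three-second threshold are made up for the example:

```python
def apdex(samples, t):
    """Compute the Apdex index for a list of response times (in seconds).

    Uses the default Frustrated threshold F = 4 * T: samples at or under T
    are Satisfied (full weight), samples between T and F are Tolerating
    (half weight), and samples above F are Frustrated (zero weight).
    """
    f = 4 * t
    satisfied = sum(1 for s in samples if s <= t)
    tolerating = sum(1 for s in samples if t < s <= f)
    return (satisfied + tolerating / 2) / len(samples)

# Eight hypothetical page loads with T = 3 seconds (so F defaults to 12)
times = [1.2, 2.5, 2.9, 3.4, 5.0, 8.1, 13.0, 20.0]
print(round(apdex(times, t=3), 2))  # → 0.56
```

Here three samples are Satisfied, three are Tolerating, and two exceed F, so the index is (3 + 3/2) / 8 ≈ 0.56.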

The success of this index lies in its simplicity. Once you define T, the Tolerating threshold, F is automatically set four times higher. The obvious problem here is that while most users may be satisfied waiting three or four seconds for a web page to load, few will wait four times that long before giving up, so they rarely stay around long enough to register in the Frustrated zone. Depending on the type of website being measured, a more representative threshold for frustration could be much lower than 4T.

In this case, a more generic implementation of Apdex should be used. Instead of automatically setting the Frustrated threshold to 4T, it can be manually set to a more appropriate value to allow for a more fine-tuned representation of user experience. In the example above, if a web page’s Tolerating threshold is set to three seconds, but experience shows that users abandon the site after seven seconds, F could be set to seven instead of the default 12 seconds.
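
Decoupling F from T is a one-line change to the calculation. A sketch of that generic form, with illustrative numbers matching the scenario above (T = 3 seconds, abandonment around 7 seconds):

```python
def apdex(samples, t, f=None):
    """Apdex with an independently tunable Frustrated threshold.

    If f is not given, it falls back to the conventional 4 * t.
    """
    f = 4 * t if f is None else f
    satisfied = sum(1 for s in samples if s <= t)
    tolerating = sum(1 for s in samples if t < s <= f)
    return (satisfied + tolerating / 2) / len(samples)

# Hypothetical page loads for a site where users abandon after ~7 seconds
times = [2.0, 2.8, 3.5, 6.0, 8.0, 9.5]
print(round(apdex(times, t=3), 2))       # default F = 12 seconds
print(round(apdex(times, t=3, f=7), 2))  # abandonment-based F = 7 seconds
```

With the default F, the 8.0 and 9.5 second loads still earn half credit; with F set to 7, they count as Frustrated and the index drops accordingly.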

A similar issue occurs when Apdex is applied more broadly, such as to server, database, or other transaction metrics. In these cases, the Frustrated threshold cannot be assumed to sit at four times the Tolerating threshold. The point at which users become frustrated could be lower or higher than 4T, and can vary with the application's needs and the goals of the organization itself. Employing a variable Tolerating zone (where F is set independently of T) supports these needs, and is easy to implement.

Another issue with using Apdex is how to interpret the index itself. A decimal number between zero and one means little to most audiences. Apdex does specify a set of rating ranges, shown below, but like the weather report, these are rather subjective.

Apdex ratings:

  0.94 – 1.00   Excellent
  0.85 – 0.93   Good
  0.70 – 0.84   Fair
  0.50 – 0.69   Poor
  0.00 – 0.49   Unacceptable

Translating an index value to a simple verbal rating is a good idea, but certainly isn’t new. For instance, the NOAA National Weather Service uses “Partly Cloudy” when the sky is 3/8 to 5/8 covered by clouds. But just how helpful is it to say your webpage performance is Fair? It might be more productive to ask what percent of webpage loads were in the Frustrated zone, versus those in the Satisfied or Tolerating zones.
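
If you do want the verbal rating, the mapping is a simple lookup. The thresholds below are the commonly published Apdex rating ranges; verify them against the version of the specification you follow:

```python
def apdex_rating(index):
    """Map an Apdex index (0.0 to 1.0) to its verbal rating.

    Thresholds follow the commonly published Apdex rating ranges.
    """
    if index >= 0.94:
        return "Excellent"
    if index >= 0.85:
        return "Good"
    if index >= 0.70:
        return "Fair"
    if index >= 0.50:
        return "Poor"
    return "Unacceptable"

print(apdex_rating(0.96))  # → Excellent
print(apdex_rating(0.62))  # → Poor
```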

This can also be useful for those concerned with SLAs or reporting user experience to a less technical audience. For many, saying “95% of our test runs this month were satisfactory” is more meaningful than reporting how many were completed within three seconds, or that the Apdex rating was “Good.”
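
Reporting zone percentages alongside (or instead of) the index is equally simple. A minimal sketch, with made-up sample data; the function name and dataset are illustrative:

```python
def zone_percentages(samples, t, f=None):
    """Report what percent of response time samples fall in each Apdex zone.

    f defaults to the conventional 4 * t. Often more actionable for SLA
    reporting than the single index value alone.
    """
    f = 4 * t if f is None else f
    n = len(samples)
    satisfied = sum(1 for s in samples if s <= t) / n * 100
    tolerating = sum(1 for s in samples if t < s <= f) / n * 100
    frustrated = 100 - satisfied - tolerating
    return {"Satisfied": satisfied, "Tolerating": tolerating,
            "Frustrated": frustrated}

# Ten hypothetical page loads with T = 3 seconds
times = [1.5, 2.0, 2.2, 2.8, 3.6, 5.0, 14.0, 15.5, 16.0, 18.0]
for zone, pct in zone_percentages(times, t=3).items():
    print(f"{pct:.0f}% of page loads were {zone}")
```

A statement like "40% of page loads were Frustrated" gives a less technical audience something concrete to react to, where an index of 0.5 would not.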

The Apdex rating can thus be used as an initial, high-level indicator of current conditions – and when more information is needed, Apdex zones can be further explored by charting each as individual metrics. This leverages the Apdex model while maintaining simplicity and providing more than just a one-word indication.

The figure below illustrates this point by showing webpage response times with corresponding Apdex and zone percentages for a major online retailer using synthetic testing. As webpage response times worsen, the Apdex rating drops from Excellent to Unacceptable. Looking further at the Apdex zones, the bottom chart shows the percent of Satisfied page loads drop to zero, while those in the Frustrated zone peak at 80%. This provides a better view into how the incident may affect actual user experience.

[Figure: webpage response times for a major online retailer, with the corresponding Apdex rating and per-zone percentages charted over time]

Overall, if we want to use Apdex like a summary weather report, it has to be meaningful, easy to communicate, and serve as a starting point for further examination. Combined, these recommendations should help achieve those goals without sacrificing the overall simplicity of the Apdex model.


The post Clear Skies Ahead: Explaining and Contextualizing Apdex Values appeared first on Catchpoint's Blog.
