Measuring the Performance of Your Web Apps
You know that performance matters, right?
Just a few seconds slower and your site could be turning away thousands (or millions) of visitors. Don’t take my word for it: there are plenty of case studies, articles, findings, presentations, charts and more showing just how important it is to make your site load quickly. Google is even starting to shame-label slow sites. You don’t want to be that guy.
So how do you monitor and measure the performance of your web apps?
The performance of any system can be measured from several different points of view. Let’s take a brief look at three of the most common performance viewpoints for a web app: from the eyes of the developer, the server and the end-user.
This is the beginning of a series of articles that will expand upon the content given during my talk "Make it Fast: Using Modern Browser APIs to Monitor and Improve the Performance of your Web Applications" at CodeMash 2015.
Developer
The developer’s machine is the first line of defense in ensuring your web application is performing as intended. While developing your app, you are probably building, testing and addressing performance issues as you see them.
In addition to simply using your app, there are many tools you can use to measure how it’s performing. Some of my favorites are:
- Profiling, by using timestamps, Xdebug, XHProf and JetBrains dotTrace (a quick sketch follows this list)
- Browser developer tools, in IE, Chrome, Firefox, Opera and Safari
- Network monitoring, such as Fiddler, Wireshark and tcpdump
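For example, the developer tools in all of the browsers listed above support simple console-based timing, which is a quick way to profile a suspect code path while developing. Here's a minimal sketch using console.time() / console.timeEnd(); renderProductList() is a hypothetical stand-in for your own code:

// start a named timer (the label is arbitrary)
console.time('render-product-list');

renderProductList(); // hypothetical function standing in for your own code

// stop the timer and log the elapsed time to the console
console.timeEnd('render-product-list');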
Ensuring everything performs well on your development machine (which probably has tons of RAM and CPU, and a fast connection to your servers) is a good first step, but you also need to make sure your app plays well with the other services in your network, such as your web server, database, etc.
Server
Monitoring the server(s) that run your infrastructure (such as web, database and other back-end services) is a critical part of any performance monitoring strategy. Many resources and tools have been developed to help engineers see what their servers are doing. Server-level monitoring matters for both reliability (ensuring your core services are running) and scalability (ensuring your infrastructure is performing at the level you want).
From each of your servers’ points of view, there are several components that you can monitor to have visibility into how your infrastructure is performing. Some common monitoring and measuring tools are:
- HTTP logs, such as Apache, nginx and HAProxy (a minimal example follows this list)
- Server monitoring, such as top, iostat, vmstat, Cacti, MRTG, Nagios and New Relic APM
- Load testing, such as ab, JMeter, SOASTA CloudTest, BlazeMeter and HP LoadRunner
- Ops monitoring, such as Amazon CloudWatch, Datadog, CopperEgg
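To give a flavor of the per-request timing data these tools work with, here's a minimal sketch of response-time logging in a Node.js server, using only Node's built-in http module (in production you would more likely rely on the access logs of Apache, nginx, etc., which can record similar per-request timings):

var http = require('http');

http.createServer(function (req, res) {
    // timestamp when the request arrives
    var start = Date.now();

    // 'finish' fires once the response has been fully handed off
    res.on('finish', function () {
        console.log(req.method + ' ' + req.url + ' ' +
            res.statusCode + ' ' + (Date.now() - start) + 'ms');
    });

    res.end('Hello!');
}).listen(8080);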
By putting these tools together, you can get a pretty good sense of how your overall infrastructure is performing.
End-user
So you’ve developed your app, deployed it to production, and have been monitoring your infrastructure closely to ensure all of your servers are performing smoothly.
Everything should be golden, right? Your end-users are having a fantastical experience and every one of them just loves visiting your site.
… clearly, that’s probably not the case. The majority of your end-users don’t surf the web on $3,000 development machines, using the latest cutting-edge browser on a low-latency link from your datacenter. A lot of your users are probably on a low-end tablet, on a cell network, 2,000 miles away from your datacenter.
The experience you’ve curated while developing your web app on your high-end development machine will probably be the best experience possible. All of your visitors will likely experience something worse, from not-a-noticeable-difference down to can’t-stand-how-slow-it-is-and-will-never-come-back.
Measuring performance from the server and the developer’s perspective is not the full story. In the end, the only thing that really matters is what your visitor sees, and the experience they have.
Just a few years ago, the web development community didn’t have a lot of tools available to monitor the performance from their end-users’ perspectives. Sure, you could capture simple JavaScript timestamps within your code:
var startTime = Date.now();
// do stuff
// elapsed time (in milliseconds) since startTime was captured
var elapsedTime = Date.now() - startTime;
You could spread this code throughout your app and listen for browser events such as onload, but simple timestamps don't give you much visibility into the performance your end-users are actually experiencing.
In addition, since this style of timestamp profiling is just JavaScript, you have zero visibility into the browser's networking performance, or into anything that happened before the browser parsed your HTML and JavaScript.
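About the best you could do was capture a timestamp in an inline script as early in the page as possible, measure again at onload, and beacon the difference back. Here's a sketch of that approach, where /beacon is a hypothetical analytics endpoint:

// as early as possible in the <head>, before any other work
var startTime = Date.now();

window.addEventListener('load', function () {
    // everything before this script ran (DNS, TCP, request/response)
    // is invisible to this measurement
    var pageLoadTime = Date.now() - startTime;

    // report the result with a simple image beacon (hypothetical endpoint)
    new Image().src = '/beacon?t=' + pageLoadTime;
});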
W3C Webperf Working Group
To solve these issues, in 2010 the W3C (the standards body that develops web standards such as HTML5 and CSS) formed a new working group with the mission of giving developers the ability to assess and understand the performance characteristics of their web apps.
The W3C webperf working group's members include Microsoft, Google, Mozilla, Opera, Facebook, Netflix, SOASTA and more. The group collaboratively develops standards with the following goals:
- Expose information that was not previously available
- Give developers the tools they need to make their applications more efficient
- Add little to no overhead
- Provide easy-to-understand APIs
Since its inception, the working group has published a number of standards, many of which are available in modern browsers today. Some of these standards are: