Measuring the Performance of Your Web Apps

May 25th, 2015

You know that performance matters, right?

Just a few seconds slower and your site could be turning away thousands (or millions) of visitors. Don’t take my word for it: there are plenty of case studies, articles, findings, presentations, charts and more showing just how important it is to make your site load quickly. Google is even starting to shame-label slow sites. You don’t want to be that guy.

So how do you monitor and measure the performance of your web apps?

The performance of any system can be measured from several different points of view. Let’s take a brief look at three of the most common performance viewpoints for a web app: from the eyes of the developer, the server and the end-user.

This is the first in a series of articles expanding on the content of my CodeMash 2015 talk, "Make it Fast: Using Modern Browser APIs to Monitor and Improve the Performance of your Web Applications".

Developer

The developer’s machine is the first line of defense in ensuring your web application is performing as intended. While developing your app, you are probably building, testing and addressing performance issues as you see them.

In addition to simply using your app, there are many tools you can use to measure how it’s performing. Some of my favorites are:

While ensuring everything is performing well on your development machine (which probably has tons of RAM, CPU and a quick connection to your servers) is a good first step, you also need to make sure your app is playing well with other services on your network, such as your web server, database, etc.

Server

Monitoring the server(s) that run your infrastructure (web, database and other back-end services) is a core part of any performance monitoring strategy, and many resources and tools have been developed to help engineers see what their servers are doing. Monitoring at the server level is critical for both reliability (ensuring your core services are running) and scalability (ensuring your infrastructure is performing at the level you want).

From each of your servers’ points of view, there are several components that you can monitor to have visibility into how your infrastructure is performing. Some common monitoring and measuring tools are:

By putting these tools together, you can get a pretty good sense of how your overall infrastructure is performing.

End-user

So you’ve developed your app, deployed it to production, and have been monitoring your infrastructure closely to ensure all of your servers are performing smoothly.

Everything should be golden, right? Your end-users are having a fantastical experience and every one of them just loves visiting your site.

… clearly, that’s probably not the case. The majority of your end-users don’t surf the web on $3,000 development machines, using the latest cutting-edge browser on a low-latency link from your datacenter. A lot of your users are probably on a low-end tablet, on a cell network, 2,000 miles away from your datacenter.

The experience you’ve curated while developing your web app on your high-end development machine will probably be the best experience possible. All of your visitors will likely experience something worse, from not-a-noticeable-difference down to can’t-stand-how-slow-it-is-and-will-never-come-back.

Measuring performance from the server and the developer’s perspective is not the full story. In the end, the only thing that really matters is what your visitor sees, and the experience they have.

Just a few years ago, the web development community didn’t have many tools to monitor performance from the end-user’s perspective. Sure, you could capture simple JavaScript timestamps within your code:

var startTime = Date.now();
// do stuff
var elapsedTime = Date.now() - startTime;

You could spread this code throughout your app and listen for browser events such as onload, but simple timestamps don’t give a lot of visibility into the performance of your end-users.

In addition, since this style of timestamp profiling is just JavaScript, you have zero visibility into the browser’s networking performance or anything that happened before the browser parsed your HTML and JavaScript.

W3C Webperf Working Group

To solve these issues, in 2010 the W3C (the standards body that develops web standards such as HTML5 and CSS) formed a new working group with the mission of giving developers the ability to assess and understand the performance characteristics of their web apps.

The W3C webperf working group is an organization whose members include Microsoft, Google, Mozilla, Opera, Facebook, Netflix, SOASTA and more. The working group collaboratively develops standards with the following goals:

  • Expose information that was not previously available
  • Give developers the tools they need to make their applications more efficient
  • Little to no overhead
  • Easy-to-understand APIs

Since its inception, the working group has published a number of standards, many of which are available in modern browsers today, including NavigationTiming and ResourceTiming.
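
For example, NavigationTiming exposes network-level milestones for the page load that plain JavaScript timestamps can never see. Here’s a minimal sketch (run it after the load event, since loadEventEnd is 0 until the event fires); the variable names are mine:

// NavigationTiming sketch: all values are Unix-epoch milliseconds.
var t = window.performance.timing;

var dns  = t.domainLookupEnd - t.domainLookupStart;
var tcp  = t.connectEnd - t.connectStart;
var ttfb = t.responseStart - t.navigationStart;
var load = t.loadEventEnd - t.navigationStart;

console.log("DNS: " + dns + "ms, TCP: " + tcp + "ms, TTFB: " + ttfb +
    "ms, page load: " + load + "ms");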

JavaScript Module Patterns

March 17th, 2015

Presented at the Lansing JavaScript Meetup.

Slides are available on Slideshare or Github.

Make It Fast – CodeMash 2015

January 8th, 2015

Presented at CodeMash 2015.

Compressing ResourceTiming

November 7th, 2014

At SOASTA, we’re building tools and services to help our customers understand and improve the performance of their websites. Our mPulse product utilizes Real User Monitoring to capture data about page-load performance.

For browser-side data collection, mPulse uses Boomerang, which beacons every single page-load experience back to our real-time analytics engine. Boomerang utilizes NavigationTiming when possible to relay accurate performance metrics about the page load, such as the timings of DNS, TCP, SSL and the HTTP response.

ResourceTiming is another important feature in modern browsers that gives JavaScript access to performance metrics about the page’s components fetched from the network, such as CSS, JavaScript and images. mPulse will soon be releasing a new feature that lets our customers view the complete waterfall of every visitor’s session, which can be a tremendous help in debugging performance issues.

The challenge with ResourceTiming is the sheer amount of data it can produce if you want to beacon it all back to a server. For each resource, there’s data on:

  • URL
  • Initiating element (e.g. IMG)
  • Start time
  • Duration
  • Plus 11 other timestamps

Here’s an example entry from performance.getEntriesByType('resource') for a single resource:

{"responseEnd":2436.426999978721,"responseStart":2435.966999968514,
"requestStart":2435.7460000319406,"secureConnectionStart":0,
"connectEnd":2434.203000040725,"connectStart":2434.203000040725,
"domainLookupEnd":2434.203000040725,"domainLookupStart":2434.203000040725,
"fetchStart":2434.203000040725,"redirectEnd":0,"redirectStart":0,
"initiatorType":"internal","duration":2.2239999379962683,
"startTime":2434.203000040725,"entryType":"resource","name":"http://nicj.net/"}

JSON.stringify()’d, that’s 469 bytes for this one resource. Multiply that by each resource on your page, and you can quickly see that gathering and beaconing all of this data back to a server takes a lot of bandwidth and storage if you’re tracking this for every single visitor to your site. The HTTP Archive tells us that the average page is composed of 99 HTTP resources, with an average URL length of 85 bytes.

So for a rough estimate (99 resources × ~469 bytes each), you could expect around 45 KB of ResourceTiming data per page load.

The Goal

We wanted to find a way to compress this data before we JSON serialize it and beacon it back to our server.

Philip Tellis, the author of Boomerang, and I have come up with several compression techniques that can reduce the above data to about 15% of its original size.

Techniques

Let’s start out with a single resource, as you get back from window.performance.getEntriesByType("resource"):

{  
  "responseEnd":323.1100000002698,
  "responseStart":300.5000000000000,
  "requestStart":252.68599999981234,
  "secureConnectionStart":0,
  "connectEnd":0,
  "connectStart":0,
  "domainLookupEnd":0,
  "domainLookupStart":0,
  "fetchStart":252.68599999981234,
  "redirectEnd":0,
  "redirectStart":0,
  "duration":71.42400000045745,
  "startTime":252.68599999981234,
  "entryType":"resource",
  "initiatorType":"script",
  "name":"http://foo.com/js/foo.js"
}

Step 1: Drop some attributes

We don’t need:

  • entryType will always be resource
  • duration can always be calculated as responseEnd - startTime.
  • fetchStart will always be startTime (with no redirects) or redirectEnd (with redirects)

{  
  "responseEnd":323.1100000002698,
  "responseStart":300.5000000000000,
  "requestStart":252.68599999981234,
  "secureConnectionStart":0,
  "connectEnd":0,
  "connectStart":0,
  "domainLookupEnd":0,
  "domainLookupStart":0,
  "redirectEnd":0,
  "redirectStart":0,
  "startTime":252.68599999981234,
  "initiatorType":"script",
  "name":"http://foo.com/js/foo.js"
}
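
A sketch of this step, as a hypothetical dropAttributes() helper (not Boomerang’s actual code):

// Copy an entry as a plain object, dropping attributes we can infer later.
function dropAttributes(entry) {
    var copy = JSON.parse(JSON.stringify(entry));
    delete copy.entryType;  // always "resource"
    delete copy.duration;   // responseEnd - startTime
    delete copy.fetchStart; // startTime, or redirectEnd with redirects
    return copy;
}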

Step 2: Change into a fixed-size array

Since we know all of the attributes ahead of time, we can change each object into a fixed-size array. We’ll create a new object where each key is the URL and its value is the fixed-order array of timings. We’ll take care of duplicate URLs later:

{ "name": [initiatorType, startTime, redirectStart, redirectEnd,
   domainLookupStart, domainLookupEnd, connectStart, secureConnectionStart, 
   connectEnd, requestStart, responseStart, responseEnd] }

With our data:

{ "http://foo.com/foo.js": ["script", 252.68599999981234, 0, 0
   0, 0, 0, 0, 
   0, 252.68599999981234, 300.5000000000000, 323.1100000002698] }
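
In code, the mapping might look like this (hypothetical toArray() helper):

// Map an entry to the fixed-order array, keyed by URL.
function toArray(entry) {
    return [entry.initiatorType, entry.startTime,
        entry.redirectStart, entry.redirectEnd,
        entry.domainLookupStart, entry.domainLookupEnd,
        entry.connectStart, entry.secureConnectionStart,
        entry.connectEnd, entry.requestStart,
        entry.responseStart, entry.responseEnd];
}

var resources = {};
window.performance.getEntriesByType("resource").forEach(function(entry) {
    // Duplicate URLs overwrite each other here; see Step 10.
    resources[entry.name] = toArray(entry);
});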

Step 3: Drop microsecond timings

For our purposes, we don’t need sub-millisecond accuracy, so we can round all timings to the nearest millisecond:

{ "http://foo.com/foo.js": ["script", 252, 0, 0, 0, 0, 0, 0, 0, 252, 300, 323] }

Step 4: Trie

We can now use an optimized Trie to compress the URLs. A Trie is a tree structure in which common key prefixes are stored only once.

Mark Holland and Mike McCall discussed this technique at Velocity this year.

Here’s an example with multiple resources:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 0, 0, 0, 0, 0, 0, 0, 252, 300, 323]
            "css/foo.css": ["css", 300, 0, 0, 0, 0, 0, 0, 0, 305, 340, 500]
        },
        "other.com/other.css": [...]
    }
}
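
Here’s a simplified sketch of how such a trie can be built: insert each URL character-by-character, then merge chains of single-child nodes into one multi-character key. (This is an illustration, not Boomerang’s exact algorithm, and it ignores the case where one URL is a prefix of another.)

// Build a character trie from { url: data }, then compress it.
function buildTrie(map) {
    var trie = {};
    Object.keys(map).forEach(function(url) {
        var node = trie;
        for (var i = 0; i < url.length - 1; i++) {
            var c = url.charAt(i);
            node = node[c] = node[c] || {};
        }
        node[url.charAt(url.length - 1)] = map[url];
    });
    return compress(trie);
}

// Merge single-child chains ("h" -> "t" -> "t" -> "p") into one key ("http").
function compress(node) {
    if (typeof node !== "object" || Array.isArray(node)) {
        return node; // leaf: the timing data
    }
    var out = {};
    Object.keys(node).forEach(function(key) {
        var child = compress(node[key]);
        while (typeof child === "object" && !Array.isArray(child) &&
                Object.keys(child).length === 1) {
            var only = Object.keys(child)[0];
            key += only;
            child = child[only];
        }
        out[key] = child;
    });
    return out;
}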

Step 5: Offset from startTime

If we offset each of the remaining timestamps from startTime (every non-zero timestamp is at least startTime), they may use fewer characters. Timestamps that are zero stay zero:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 0, 0, 0, 0, 0, 0, 0, 0, 48, 71],
            "css/foo.css": ["script", 300, 0, 0, 0, 0, 0, 5, 40, 200]
        },
        "other.com/other.css": [...]
    }
}
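
In code (note that zero timestamps must be skipped, since offsetting them would produce negative values):

// Offset every timing after startTime (index 1); leave zeros alone.
function offsetTimings(arr) {
    var startTime = arr[1];
    return arr.map(function(value, i) {
        return (i < 2 || value === 0) ? value : value - startTime;
    });
}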

Step 6: Reverse the timestamps and drop any trailing 0s

The only two required timestamps in ResourceTiming are startTime and responseEnd. Other timestamps may be zero, either because the resource was cross-origin (which hides most of its timestamps), or because the phase took no time relative to startTime, such as domainLookupStart when DNS was already resolved.

If we re-order the timestamps so that, after startTime, we put them in reverse order, we’re more likely to have the “zero” timestamps at the end of the array.

{ "name": [initiatorType, startTime, responseEnd, responseStart,
   requestStart, connectEnd, secureConnectionStart, connectStart,
   domainLookupEnd, domainLookupStart, redirectEnd, redirectStart] }

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 71, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0]
            "css/foo.css": ["script", 300, 200, 40, 5, 0, 0, 0, 0, 0, 0, 0, 0]
        }
    }
}

Once we have all of the zero timestamps towards the end of the array, we can drop the trailing zeros. When reading the data back later, missing array values can be interpreted as zero.

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 71, 48]
            "css/foo.css": ["css", 300, 200, 40]
        }
    }
}
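
Here’s a sketch of the re-ordering and trimming, taking the fixed-order array from Step 2 as input:

// Reorder to [type, startTime, responseEnd, ..., redirectStart],
// then drop trailing zeros.
function reorderAndTrim(arr) {
    var reordered = [arr[0], arr[1]].concat(arr.slice(2).reverse());
    while (reordered.length > 2 && reordered[reordered.length - 1] === 0) {
        reordered.pop();
    }
    return reordered;
}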

Step 7: Convert initiatorType into a lookup

Using a numeric lookup instead of a string will save some bytes for initiatorType:

var INITIATOR_TYPES = {
    "other": 0,
    "img": 1,
    "link": 2,
    "script": 3,
    "css": 4,
    "xmlhttprequest": 5
};
{
    "http://": {
        "foo.com/": {
            "js/foo.js": [3, 252, 71, 48]
            "css/foo.css": [4, 300, 200, 40]
        }
    }
}

Step 8: Use Base36 for numbers

Base 36 is convenient because it can result in a smaller byte-size than Base 10, and it has built-in browser support via JavaScript’s toString(36):

{
    "http://": {
        "foo.com/": {
            "js/foo.js": [3, "70", "1z", "1c"]
            "css/foo.css": [4, "8c", "5k", "14"]
        }
    }
}
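
Conversion in both directions is built into JavaScript:

(252).toString(36); // "70"
parseInt("1z", 36); // 71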

Step 9: Compact the array into a string

Serializing the array as a single comma-separated string saves a few more bytes (no per-element brackets or quotes). We’ll designate the first byte as the initiatorType:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": "370,1z,1c",
            "css/foo.css": "48c,5k,14"
        }
    }
}
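
As a sketch, the final per-resource serialization is then:

// [3, "70", "1z", "1c"] -> "370,1z,1c"
function compactArray(arr) {
    return String(arr[0]) + arr.slice(1).join(",");
}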

Step 10: Multiple hits

Finally, if there are multiple hits to the same resource, the keys (URLs) in the Trie will conflict with each other.

Let’s fix this by concatenating multiple hits to the same URL via a special character such as pipe | (see foo.js below):

{
    "http://": {
        "foo.com/": {
            "js/foo.js": "370,1z,1c|390,1,2",
            "css/foo.css": "48c,5k,14"
        }
    }
}
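
When reading the data back, splitting on the pipe character recovers each individual hit:

// "370,1z,1c|390,1,2" -> ["370,1z,1c", "390,1,2"]
"370,1z,1c|390,1,2".split("|");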

Step 11: Gzip or MsgPack

Applying gzip compression or MsgPack can give additional savings during transport and storage.

Results

Overall, the above techniques compress raw JSON.stringify(performance.getEntriesByType('resource')) to about 15% of its original size.

Taking a few sample pages:

  • Search engine home page
    • Raw: 1,000 bytes
    • Compressed: 172 bytes
  • Questions and answers page
    • Raw: 5,453 bytes
    • Compressed: 789 bytes
  • News home page
    • Raw: 32,480 bytes
    • Compressed: 4,949 bytes

How-To

These compression techniques have been added to the latest version of Boomerang.

I’ve also released a small library that handles both compression and decompression of the optimized result: resourcetiming-compression.js.

This article also appears on soasta.com.

Sails.js Intro

August 8th, 2014

Last night at GrNodeDev I gave a small presentation on Sails.js, an awesome Node.js web framework built on top of Express.

Slides are available on Slideshare and Github.