Compressing UserTiming

December 10th, 2015

UserTiming is a modern browser performance API that gives developers the ability to mark important events (timestamps) and measure durations (timestamp deltas) in their web apps. For an in-depth overview of how UserTiming works, you can see my article UserTiming in Practice or read Steve Souders’ excellent post with several examples of how to use UserTiming to measure your app.

UserTiming is very simple to use. Let’s do a brief review. If you want to mark an important event, just call window.performance.mark(markName):

// log the beginning of our task
performance.mark("start");

You can call .mark() as many times as you want, with whatever markName you want. You can repeat the same markName as well.

The data is stored in the PerformanceTimeline. You query the PerformanceTimeline via methods like performance.getEntriesByName(markName):

// get the data back
var entry = performance.getEntriesByName("start");
// -> {"name": "start", "entryType": "mark", "startTime": 1, "duration": 0}

Pretty simple right? Again, see Steve’s article for some great use cases.

So let’s imagine you’re sold on using UserTiming. You start instrumenting your website, placing marks and measures throughout the life-cycle of your app. Now what?

The data isn’t useful unless you’re looking at it. On your own machine, you can query the PerformanceTimeline and see marks and measures in the browser developer tools. There are also third party services that give you a view of your UserTiming data.

What if you want to gather the data yourself? What if you’re interested in trending different marks or measures in your own analytics tools?

The easy approach is to simply fetch all of the marks and measures via performance.getEntriesByType(), stringify the JSON, and XHR it back to your stats engine.

But how big is that data?

Let’s look at some example data — this was captured from a website I was browsing:


That’s 657 bytes for just 7 marks. What if you want to log dozens, hundreds, or even thousands of important events on your page? What if you have a Single Page App, where the user can generate many events over the lifetime of their session?

Clearly, we can do better. The signal-to-noise ratio of stringified JSON isn’t that good. As performance-conscious developers, we should strive to minimize our visitors’ upstream bandwidth usage when sending our analytics packets.

Let’s see what we can do.

The Goal

Our goal is to reduce the size of an array of marks and measures down to a data structure that’s as small as possible so that we’re only left with a minimal payload that can be quickly beacon’d to a server for aggregate analysis.

For a similar domain-specific compression technique for ResourceTiming data, please see my post on Compressing ResourceTiming. The techniques we will discuss for UserTiming will build on some of the same things we can do for ResourceTiming data.

An additional goal is that we’re going to stick with techniques where the resulting compressed data doesn’t expand from URL encoding if used in a query-string parameter. This makes it easy to just tack on the data to an existing analytics or Real-User-Monitoring (RUM) beacon.

The Approach

There are two main areas of our data-structure that we can compress. Let’s take a single measure as an example:

{
    "name":      "measureName",
    "entryType": "measure",
    "startTime": 2289.4089999972493,
    "duration":  100.12314141
}

What data is important here? Each mark and measure has 4 attributes:

  1. Its name
  2. Whether it’s a mark or a measure
  3. Its start time
  4. Its duration (for marks, this is 0)

I’m going to suggest we can break these down into two main areas: the object and its payload. The object is simply the mark or measure’s name. The payload is its start time and, if it’s a measure, its duration. A duration implies that the object is a measure, so we don’t need to track that attribute independently.

Essentially, we can break up our UserTiming data into a key-value pair. Grouping by the mark or measure name lets us play some interesting games, so the name will be the key. The value (payload) will be the list of start times and durations for each mark or measure name.

First, we’ll compress the payload (all of the timestamps and durations). Then, we can compress the list of objects.

So, let’s start out by compressing the timestamps!

Compressing the Timestamps

The first thing we want to compress for each mark or measure are its timestamps.

To begin with, startTime and duration are in millisecond resolution, with microseconds in the fraction. Most people probably don’t need microsecond resolution, and it adds a ton of byte size to the payload. A startTime of 2289.4089999972493 can probably be compressed down to just 2,289 milliseconds without sacrificing much accuracy.

So let’s say we have 3 marks to begin with:

[
    { "name": "mark1", "entryType": "mark", "startTime": 100, "duration": 0 },
    { "name": "mark1", "entryType": "mark", "startTime": 150, "duration": 0 },
    { "name": "mark1", "entryType": "mark", "startTime": 500, "duration": 0 }
]

Grouping by mark name, we can reduce this structure to an array of start times for each mark:

{ "mark1": [100, 150, 500] }

One of the truths of UserTiming is that when you fetch the entries via performance.getEntries(), they are in sorted order.

Let’s use this to our advantage, by offsetting each timestamp by the one in front of it. For example, the 150 timestamp is only 50ms away from the 100 timestamp before it, so its value can be instead set to 50. 500 is 350ms away from 150, so it gets set to 350. We end up with smaller integers this way, which will make compression easier later:

{ "mark1": [100, 50, 350] }
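A minimal sketch of that offsetting step (offsetTimestamps is our name for it, not part of any library):

```javascript
// Delta-encode a sorted list of startTimes: each entry becomes its
// offset from the entry before it.
function offsetTimestamps(startTimes) {
    var out = [];
    for (var i = 0; i < startTimes.length; i++) {
        out.push(i === 0 ? startTimes[i] : startTimes[i] - startTimes[i - 1]);
    }
    return out;
}
```

For example, `offsetTimestamps([100, 150, 500])` returns `[100, 50, 350]`.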

How can we compress the numbers further? Remember, one goal is to make the resulting data transmit easier on a URL (query string), so we mostly want to use the ASCII alpha-numeric set of characters.

One really easy way of reducing the number of bytes taken by a number in JavaScript is Base-36 encoding. In other words, 0=0, 10=a, 35=z. Even better, JavaScript has this built in via Number.prototype.toString(radix):

(35).toString(36)          == "z" (saves 1 character)
(99999999999).toString(36) == "19xtf1tr" (saves 3 characters)

Once we Base-36 encode all of our offset timestamps, we’re left with a smaller number of characters:

{ "mark1": ["2s", "1e", "9q"] }

Now that we have these timestamps offsets in Base-36, we can combine (join) them into a single string so they’re easily transmitted. We should avoid using the comma character (,), as it is one of the reserved characters of the URI spec (RFC 3986), so it will be escaped to %2C.

The list of non-URI-encoded characters is pretty small:

[0-9a-zA-Z] $ - _ . + ! * ' ( )

The period (.) looks a lot like a comma, so let’s go with that. Applying a simple Array.join("."), we get:

{ "mark1": "2s.1e.9q" }
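Putting the steps so far together (rounding to milliseconds, offsetting, Base-36 encoding, joining with a period), a sketch of the timestamp pipeline might look like this:

```javascript
// Compress a sorted list of startTimes into a "."-joined string of
// Base-36 offsets, rounding away the sub-millisecond fraction first.
function compressTimestamps(startTimes) {
    var out = [];
    var last = 0;
    for (var i = 0; i < startTimes.length; i++) {
        var t = Math.round(startTimes[i]);      // drop microseconds
        out.push((t - last).toString(36));      // offset from previous
        last = t;
    }
    return out.join(".");                       // "." is URI-safe
}
```

For example, `compressTimestamps([100, 150, 500])` returns `"2s.1e.9q"`.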

So we’re really starting to reduce the byte size of these timestamps. But wait, there’s more we can do!

Let’s say we have some timestamps that came in at a semi-regular interval:

{ "mark1": [100, 200, 300] }

Compressed down, we get:

{ "mark1": "2s.2s.2s" }

Why should we repeat ourselves?

Let’s use one of the other non-URI-encoded characters, the asterisk (*), to note when a timestamp offset repeats itself:

  • A single * means it repeated twice
  • *[n] means it repeated n times.

So the above timestamps can be compressed further to:

{ "mark1": "2s*3" }

Obviously, this compression depends on the application’s characteristics, but periodic marks can be seen in the wild.
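A sketch of that repeat-encoding, assuming the repeat count itself is written in Base-36 (an implementation choice on our part):

```javascript
// Run-length encode a list of Base-36 offset tokens:
// "x,x" becomes "x*" and "x,x,x,..." becomes "x*n" (n in Base-36).
function encodeRepeats(tokens) {
    var out = [];
    var i = 0;
    while (i < tokens.length) {
        var j = i;
        while (j < tokens.length && tokens[j] === tokens[i]) {
            j++;
        }
        var count = j - i;
        if (count === 1) {
            out.push(tokens[i]);
        } else if (count === 2) {
            out.push(tokens[i] + "*");
        } else {
            out.push(tokens[i] + "*" + count.toString(36));
        }
        i = j;
    }
    return out.join(".");
}
```

For example, `encodeRepeats(["2s", "2s", "2s"])` returns `"2s*3"`, while non-repeating tokens pass through unchanged.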


What about measures? Measures have the additional data component of a duration. For marks these are always 0 (you’re just logging a point in time), but durations are another millisecond attribute.

We can adapt our previous string to include durations, if available. We can even mix marks and measures of the same name and not get confused later.

Let’s use this data set as an example. One mark and two measures (sharing the same name):

[
    { "name": "foo", "entryType": "mark",    "startTime": 100, "duration": 0 },
    { "name": "foo", "entryType": "measure", "startTime": 150, "duration": 100 },
    { "name": "foo", "entryType": "measure", "startTime": 500, "duration": 200 }
]

Instead of an array of Base-36-encoded offset timestamps, we need to include a duration, if available. Picking another non-URI-encoded character, the underscore (_), we can easily “tack” this information on to the end of each startTime.

For example, with a startTime of 150 (an offset of 50 from the previous mark, or 1e in Base-36) and a duration of 100 (2s in Base-36), we get a simple string of 1e_2s.

Combining the above marks and measures, we get:

{ "foo": "2s.1e_2s.9q_5k" }

Later, when we’re decoding this, we haven’t lost track of the fact that there are both marks and measures intermixed here, since only measures have durations.
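A sketch that covers both cases, where marks contribute just an offset and measures get a "_duration" suffix (compressEntries is our name):

```javascript
// Compress a sorted list of same-name marks/measures into the dotted
// offset format, appending "_<duration>" (Base-36) for measures.
function compressEntries(entries) {
    var out = [];
    var last = 0;
    entries.forEach(function (e) {
        var st = Math.round(e.startTime);
        var token = (st - last).toString(36);
        if (e.entryType === "measure") {
            token += "_" + Math.round(e.duration).toString(36);
        }
        out.push(token);
        last = st;
    });
    return out.join(".");
}
```

Fed the mark-plus-two-measures data set above, this produces `"2s.1e_2s.9q_5k"`.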

Going back to our original example:

[
    { "name": "mark1", "entryType": "mark", "startTime": 100, "duration": 0 },
    { "name": "mark1", "entryType": "mark", "startTime": 150, "duration": 0 },
    { "name": "mark1", "entryType": "mark", "startTime": 500, "duration": 0 }
]

Let’s compare that JSON string to how we’ve compressed it (still in JSON form, which isn’t very URI-friendly):

{ "mark1": "2s.1e.9q" }

198 bytes originally versus 21 bytes with just the above techniques, or about 10% of the original size.

Not bad so far.
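To confirm the format is lossless (aside from the millisecond rounding), here’s a decoding sketch; for brevity it ignores the * repeat syntax, and decompressEntries is our name:

```javascript
// Decode a string like "2s.1e_2s.9q_5k" back into entries; tokens
// containing "_" are measures, everything else is a mark.
function decompressEntries(name, str) {
    var entries = [];
    var last = 0;
    str.split(".").forEach(function (token) {
        var parts = token.split("_");
        var startTime = last + parseInt(parts[0], 36);
        entries.push({
            name: name,
            entryType: parts.length > 1 ? "measure" : "mark",
            startTime: startTime,
            duration: parts.length > 1 ? parseInt(parts[1], 36) : 0
        });
        last = startTime;
    });
    return entries;
}
```

For example, `decompressEntries("foo", "2s.1e_2s.9q_5k")` recovers the mark at 100 and the measures at 150 (duration 100) and 500 (duration 200).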

Compressing the Array

Most sites won’t have just a single mark or measure name that they want to transmit. Most sites using UserTiming will have many different mark/measure names and values.

We’ve compressed the actual timestamps to a pretty small (URI-friendly) value, but what happens when we need to transmit an array of different marks/measures and their respective timestamps?

Let’s pretend there are 3 marks and 3 measures on the page, each with one timestamp. After applying timestamp compression, we’re left with:

{
    "mark1": "2s",
    "mark2": "5k",
    "mark3": "8c",
    "measure1": "2s_2s",
    "measure2": "5k_5k",
    "measure3": "8c_8c"
}

There are several ways we can compress this data to a format suitable for URL transmission. Let’s explore.

Using an Array

Remember, JSON is not URI friendly, mostly due to curly braces ({ }), quotes (") and colons (:) having to be escaped.

Even in a minified JSON form:

{"mark1":"2s","mark2":"5k","mark3":"8c","measure1":"2s_2s","measure2":"5k_5k","measure3":"8c_8c"}

(98 bytes)

This is what it looks like after URI encoding:

(174 bytes)

Gah! That’s almost 77% overhead.

Since we have a list of known keys (names) and values, we could instead change this object into an “array” where we’re not using { } " : characters to delimit things.

Let’s use another URI-friendly character, the tilde (~), to separate each. Here’s what the format could look like:

name1~value1~name2~value2~...~nameN~valueN

Using our data:

mark1~2s~mark2~5k~mark3~8c~measure1~2s_2s~measure2~5k_5k~measure3~8c_8c

(73 bytes)

Note that this depends on your names not including a tilde, or, you can pre-escape tildes in names to %7E.
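A sketch of building that tilde-separated string from a { name: compressedTimestamps } map, pre-escaping tildes in names as noted (toTildeArray is our name):

```javascript
// Flatten { name: value } into "name~value~name~value", escaping any
// "~" that appears in a name so the delimiters stay unambiguous.
function toTildeArray(map) {
    var parts = [];
    for (var name in map) {
        if (map.hasOwnProperty(name)) {
            parts.push(name.replace(/~/g, "%7E"));
            parts.push(map[name]);
        }
    }
    return parts.join("~");
}
```

For example, `toTildeArray({ "mark1": "2s", "measure1": "2s_2s" })` returns `"mark1~2s~measure1~2s_2s"`.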

Using an Optimized Trie

That’s one way of compressing the data. In some cases, we can do better, especially if your names look similar.

One great technique we used in compressing ResourceTiming data is an optimized Trie. Essentially, you can compress strings anytime one is a prefix of another.

In our example above, mark1, mark2 and mark3 are perfect candidates, since they all have a stem of "mark". In optimized Trie form, our above data would look something closer to:

{
    "mark": {
        "1": "2s",
        "2": "5k",
        "3": "8c"
    },
    "measure": {
        "1": "2s_2s",
        "2": "5k_5k",
        "3": "8c_8c"
    }
}

Minified, this is 13% smaller than the original non-Trie data:

{"mark":{"1":"2s","2":"5k","3":"8c"},"measure":{"1":"2s_2s","2":"5k_5k","3":"8c_8c"}}

(86 bytes)

However, this is not as easily compressible into a tilde-separated array, since it’s no longer a flat data structure.
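For illustration, here’s one simplified way such a trie could be built: insert each name character-by-character, then collapse chains of single-child nodes into multi-character keys. This sketch (buildTrie is our name) assumes no name is a full prefix of another, and it splits on every shared prefix (so mark1 and measure1 would share an "m" node), which is cruder than a truly optimized Trie:

```javascript
// Build a character-level trie from { name: value }, then collapse
// single-child chains into multi-character keys.
function buildTrie(map) {
    var root = {};

    // insert each name one character at a time; leaves hold the value
    Object.keys(map).forEach(function (name) {
        var node = root;
        for (var i = 0; i < name.length; i++) {
            var c = name.charAt(i);
            node.children = node.children || {};
            node.children[c] = node.children[c] || {};
            node = node.children[c];
        }
        node.value = map[name];
    });

    // merge runs of nodes that have exactly one child and no value
    function collapse(node) {
        if (!node.children) {
            return node.value;
        }
        var out = {};
        Object.keys(node.children).forEach(function (key) {
            var child = node.children[key];
            while (child.children &&
                    Object.keys(child.children).length === 1 &&
                    child.value === undefined) {
                var next = Object.keys(child.children)[0];
                key += next;
                child = child.children[next];
            }
            out[key] = collapse(child);
        });
        return out;
    }

    return collapse(root);
}
```

Given just the three marks, `buildTrie({ "mark1": "2s", "mark2": "5k", "mark3": "8c" })` produces `{ "mark": { "1": "2s", "2": "5k", "3": "8c" } }`.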

There’s actually a great way to compress this JSON data for URL transmission, called JSURL. Basically, JSURL replaces all non-URI-friendly characters with a more URI-friendly representation. Here’s what the above JSON looks like regular URI-encoded:

(185 bytes)

Versus JSURL encoded:

(67 bytes)

This JSURL encoding of an optimized Trie reduces the byte size by about 10% versus a tilde-separated array.

Using a Map

Finally, if you know what your mark / measure names will be ahead of time, you may not need to transmit the actual names at all. If the set of your names is finite, you could maintain a map of name : index pairs, and only have to transmit the indexed value for each name.

Using the 3 marks and measures from before:

{
    "mark1": "2s",
    "mark2": "5k",
    "mark3": "8c",
    "measure1": "2s_2s",
    "measure2": "5k_5k",
    "measure3": "8c_8c"
}

What if we simply mapped these names to numbers 0-5:

{
    "mark1": 0,
    "mark2": 1,
    "mark3": 2,
    "measure1": 3,
    "measure2": 4,
    "measure3": 5
}

Since we no longer have to compress names via a Trie, we can go back to an optimized array. And since the size of the index is relatively small (values 0-35 fit into a single character), we can save some room by not having a dedicated character (~) that separates each index and value (timestamps).

Taking the above example, we can have each name fit into a string in this format:

[index][timestamps]~[index][timestamps]~...

Using our data:

02s~15k~28c~32s_2s~45k_5k~58c_8c

(32 bytes)

This structure is less than half the size of the optimized Trie (JSURL encoded).

If you have over 36 mapped name : index pairs, we can still accommodate them in this structure. Remember, at value 36 (the 37th value from 0), (36).toString(36) == 10, taking two characters. We can’t just use an index of two characters, since our assumption above is that the index is only a single character.

One way of dealing with this is by adding a special encoding if the index is over a certain value. We’ll optimize the structure to assume you’re only going to use 36 values, but, if you have over 36, we can accommodate that as well. For example, let’s use one of the final non-URI-encoded characters we have left over, the dash (-):

If the first character of an item in the array is:

  • 0-z (index values 0 – 35), that is the index value
  • -, the next two characters are the index (plus 36)

Thus, the value 0 is encoded as 0, 35 is encoded as z, 36 is encoded as -00, and 1331 is encoded as -zz. This gives us a total of 1,332 mapped values (indexes 0 through 1331), each encoded in either one or three characters.
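A sketch of that index encoding (encodeIndex is our name):

```javascript
// Encode a map index: one Base-36 character for 0-35, or "-" followed
// by two Base-36 characters (index minus 36) for 36-1331.
function encodeIndex(index) {
    if (index < 36) {
        return index.toString(36);
    }
    var rest = (index - 36).toString(36);
    return "-" + (rest.length < 2 ? "0" + rest : rest);
}
```

For example, `encodeIndex(35)` is `"z"`, `encodeIndex(36)` is `"-00"`, and `encodeIndex(1331)` is `"-zz"`.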

So, given compressed values of:

{
    "mark1": "2s",
    "mark2": "5k",
    "mark3": "8c"
}

And a mapping of:

{
    "mark1": 36,
    "mark2": 37,
    "mark3": 1331
}

You could compress this as:

-002s~-015k~-zz8c

We now have 3 different ways of compressing our array of marks and measures.

We can even swap between them, depending on which compresses the best each time we gather UserTiming data.
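Choosing between them can be as simple as encoding the data every way and keeping the shortest result, e.g.:

```javascript
// Given the same data encoded several different ways, keep whichever
// encoding came out smallest.
function pickSmallest(candidates) {
    return candidates.reduce(function (best, c) {
        return c.length < best.length ? c : best;
    });
}
```

(A real beacon would also need a flag telling the server which encoding won, so it knows how to decode.)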

Test Cases

So how do these techniques apply to some real-world (and concocted) data?

I navigated around the Alexa Top 50 (by traffic) websites, to see who’s using UserTiming (not many). I gathered any examples I could, and created some of my own test cases as well. With this, I currently have a corpus of 20 real and fake UserTiming examples.

Let’s first compare JSON.stringify() of our UserTiming data versus the culmination of all of the techniques above:

| Test    | JSON | UTC | UTC % |
| 01.json | 415  | 66  | 16%   |
| 02.json | 196  | 11  | 6%    |
| 03.json | 521  | 18  | 3%    |
| 04.json | 217  | 36  | 17%   |
| 05.json | 364  | 66  | 18%   |
| 06.json | 334  | 43  | 13%   |
| 07.json | 460  | 43  | 9%    |
| 08.json | 91   | 20  | 22%   |
| 09.json | 749  | 63  | 8%    |
| 10.json | 103  | 32  | 31%   |
| 11.json | 231  | 20  | 9%    |
| 12.json | 232  | 19  | 8%    |
| 13.json | 172  | 34  | 20%   |
| 14.json | 658  | 145 | 22%   |
| 15.json | 89   | 48  | 54%   |
| 16.json | 415  | 33  | 8%    |
| 17.json | 196  | 18  | 9%    |
| 18.json | 196  | 8   | 4%    |
| 19.json | 228  | 50  | 22%   |
| 20.json | 651  | 38  | 6%    |
| Total   | 6518 | 811 | 12%   |

* JSON      = JSON.stringify(UserTiming).length (bytes)
* UTC       = Applying UserTimingCompression (bytes)
* UTC %     = UTC bytes / JSON bytes

Pretty good, right? On average, we shrink the data down to about 12% of its original size.

In addition, the resulting data is now URL-friendly.


usertiming-compression.js (and its companion, usertiming-decompression.js) are open-source JavaScript modules (UserTimingCompression and UserTimingDecompression) that apply all of the techniques above.

They are available on Github.

These scripts are meant to provide an easy, drop-in way of compressing your UserTiming data. They compress UserTiming via one of the methods listed above, depending on which way compresses best.

If you have intimate knowledge of your UserTiming marks, measures and how they’re organized, you could probably construct an even more optimized data structure for capturing and transmitting your UserTiming data. You could also trim the scripts to only use the compression technique that works best for you.

Versus Gzip / Deflate

Wait, why did we go through all of this mumbo-jumbo when there are already great ways of compressing data? Why not just gzip the stringified JSON?

That’s one approach. One challenge is that there isn’t native support for gzip in JavaScript. Thankfully, you can use one of the excellent open-source libraries, like pako.

Let’s compare the UserTimingCompression techniques to gzipping the raw UserTiming JSON:

| Test    | JSON | UTC | UTC % | JSON.gz | JSON.gz % |
| 01.json | 415  | 66  | 16%   | 114     | 27%       |
| 02.json | 196  | 11  | 6%    | 74      | 38%       |
| 03.json | 521  | 18  | 3%    | 79      | 15%       |
| 04.json | 217  | 36  | 17%   | 92      | 42%       |
| 05.json | 364  | 66  | 18%   | 102     | 28%       |
| 06.json | 334  | 43  | 13%   | 96      | 29%       |
| 07.json | 460  | 43  | 9%    | 158     | 34%       |
| 08.json | 91   | 20  | 22%   | 88      | 97%       |
| 09.json | 749  | 63  | 8%    | 195     | 26%       |
| 10.json | 103  | 32  | 31%   | 102     | 99%       |
| 11.json | 231  | 20  | 9%    | 120     | 52%       |
| 12.json | 232  | 19  | 8%    | 123     | 53%       |
| 13.json | 172  | 34  | 20%   | 112     | 65%       |
| 14.json | 658  | 145 | 22%   | 217     | 33%       |
| 15.json | 89   | 48  | 54%   | 91      | 102%      |
| 16.json | 415  | 33  | 8%    | 114     | 27%       |
| 17.json | 196  | 18  | 9%    | 81      | 41%       |
| 18.json | 196  | 8   | 4%    | 74      | 38%       |
| 19.json | 228  | 50  | 22%   | 103     | 45%       |
| 20.json | 651  | 38  | 6%    | 115     | 18%       |
| Total   | 6518 | 811 | 12%   | 2250    | 35%       |

* JSON      = JSON.stringify(UserTiming).length (bytes)
* UTC       = Applying UserTimingCompression (bytes)
* UTC %     = UTC bytes / JSON bytes
* JSON.gz   = gzip(JSON.stringify(UserTiming)).length
* JSON.gz % = JSON.gz bytes / JSON bytes

As you can see, gzip does a pretty good job of compressing the raw (stringified) JSON – on average, reducing it to 35% of its original size. However, UserTimingCompression does a much better job, reducing it to 12% of the original size.

What if instead of gzipping the UserTiming JSON, we gzip the minified timestamp map? For example, instead of:

[{"name":"mark1","entryType":"mark","startTime":100,"duration":0}, ...]

What if we gzipped the output of compressing the timestamps?

{"mark1":"2s.1e.9q"}

Here are the results:

| Test    | UTC | UTC.gz | UTC.gz % |
| 01.json | 66  | 62     | 94%      |
| 02.json | 11  | 24     | 218%     |
| 03.json | 18  | 28     | 156%     |
| 04.json | 36  | 46     | 128%     |
| 05.json | 66  | 58     | 88%      |
| 06.json | 43  | 43     | 100%     |
| 07.json | 43  | 60     | 140%     |
| 08.json | 20  | 33     | 165%     |
| 09.json | 63  | 76     | 121%     |
| 10.json | 32  | 45     | 141%     |
| 11.json | 20  | 37     | 185%     |
| 12.json | 19  | 35     | 184%     |
| 13.json | 34  | 40     | 118%     |
| 14.json | 145 | 112    | 77%      |
| 15.json | 48  | 45     | 94%      |
| 16.json | 33  | 50     | 152%     |
| 17.json | 18  | 37     | 206%     |
| 18.json | 8   | 23     | 288%     |
| 19.json | 50  | 53     | 106%     |
| 20.json | 38  | 51     | 134%     |
| Total   | 811 | 958    | 118%     |

* UTC      = Applying full UserTimingCompression (bytes)
* UTC.gz   = gzip(UTC timestamp compression).length
* UTC.gz % = UTC.gz bytes / UTC bytes

Even with pre-applying the timestamp compression and gzipping the result, gzip doesn’t beat the full UserTimingCompression techniques. Here, in general, gzip is 18% larger than UserTimingCompression. There are a few cases where gzip is better, notably in test cases with a lot of repeating strings.

Additionally, applying gzip requires that your app include a JavaScript gzip library, like pako, whose deflate code is currently around 26.3 KB minified. usertiming-compression.js is much smaller, at only 3.9 KB minified.

Finally, if you’re using gzip compression, you can’t just stick the gzip data into a Query String, as URL encoding will increase its size tremendously.

If you’re already using gzip to compress data, it’s a decent choice, but applying some domain-specific knowledge of our data structures gives us better compression in most cases.

Versus MessagePack

MessagePack is another interesting choice for compressing data. In fact, its motto is “It’s like JSON. but fast and small.” I like MessagePack and use it for other projects. MessagePack is an efficient binary serialization format that takes JSON input and distills it down to a minimal form. It works with any JSON data structure, and is very portable.

How does MessagePack compare to the UserTiming compression techniques?

MessagePack only compresses the original UserTiming JSON to 72% of its original size. That’s great for a general-purpose serialization library, but not nearly as good as UserTimingCompression. Notably, this is because MessagePack retains the JSON strings (e.g. startTime, duration, etc.) for each UserTiming object:

|         | JSON | UTC | UTC % | JSON.pack | JSON.pack % |
| Total   | 6518 | 811 | 12%   | 4718      | 72%         |

* UTC         = Applying UserTimingCompression (bytes)
* UTC %       = UTC bytes / JSON bytes
* JSON.pack   = MsgPack(JSON.stringify(UserTiming)).length
* JSON.pack % = JSON.pack bytes / JSON bytes

What if we just MessagePack the compressed timestamps? (e.g. {"mark1":"2s.1e.9q", ...})

| Test    | UTC | TS.pack | TS.pack % |
| 01.json | 66  | 73      | 111%      |
| 02.json | 11  | 12      | 109%      |
| 03.json | 18  | 19      | 106%      |
| 04.json | 36  | 43      | 119%      |
| 05.json | 66  | 76      | 115%      |
| 06.json | 43  | 44      | 102%      |
| 07.json | 43  | 43      | 100%      |
| 08.json | 20  | 21      | 105%      |
| 09.json | 63  | 63      | 100%      |
| 10.json | 32  | 33      | 103%      |
| 11.json | 20  | 21      | 105%      |
| 12.json | 19  | 20      | 105%      |
| 13.json | 34  | 33      | 97%       |
| 14.json | 145 | 171     | 118%      |
| 15.json | 48  | 31      | 65%       |
| 16.json | 33  | 40      | 121%      |
| 17.json | 18  | 21      | 117%      |
| 18.json | 8   | 11      | 138%      |
| 19.json | 50  | 52      | 104%      |
| 20.json | 38  | 40      | 105%      |
| Total   | 811 | 867     | 107%      |

* UTC       = Applying full UserTimingCompression (bytes)
* TS.pack   = MsgPack(UTC timestamp compression).length
* TS.pack % = TS.pack bytes / UTC bytes

For our 20 test cases, MessagePack is about 7% larger than the UserTiming compression techniques.

Like using a JavaScript module for gzip, the most popular MessagePack JavaScript modules are pretty hefty, at 29.2 KB, 36.9 KB, and 104 KB. Compare this to only 3.9 KB minified for usertiming-compression.js.

Basically, if you have good domain-specific knowledge of your data-structures, you can often compress better than a general-case minimizer like gzip or MessagePack.


It’s fun taking a data-structure you want to work with and compressing it down as much as possible.

UserTiming is a great (and under-utilized) browser API that I hope to see get adopted more. If you’re already using UserTiming, you might have already solved the issue of how to capture, transmit and store these datapoints. If not, I hope these techniques and tools will help you on your way towards using the API.

Do you have ideas for how to compress this data down even further? Let me know!

usertiming-compression.js (and usertiming-decompression.js) are available on Github.


Forensic Tools for In-Depth Performance Investigations

October 15th, 2015

Another talk Philip Tellis and I gave at Velocity New York 2015 was about the forensic tools we use for investigating performance issues.  Check it out on Slideshare:


In this talk, we cover a variety of tools such as WebPagetest, tcpdump, Wireshark, Cloudshark, browser developer tools, Chrome tracing, netlog, Fiddler, RUM, TamperMonkey, NodeJS, virtualization, Event Tracing for Windows (ETW), xperf and more while diving into real issues we’ve had to investigate in the past.

The talk is also available on YouTube.

Measuring the Performance of Single Page Applications

October 15th, 2015

Philip Tellis and I recently gave this talk at Velocity New York 2015.  Check out the slides on Slideshare:


In the talk, we discuss the three main challenges of measuring the performance of SPAs, and how we’ve been able to build SPA performance monitoring into Boomerang.

The talk is also available on YouTube.

UserTiming in Practice

May 29th, 2015

UserTiming is a specification developed by the W3C Web Performance working group, with the goal of giving the developer a standardized interface to log timestamps (“marks”) and durations (“measures”).

UserTiming utilizes the PerformanceTimeline that we saw in ResourceTiming, but all of the UserTiming events are put there by you, the developer (or by the third-party scripts you’ve included in the page).

UserTiming is currently a Recommendation, which means that browser vendors are encouraged to implement it. As of early 2015, 59% of the world-wide browser market-share supports UserTiming.

How was it done before?

Prior to UserTiming, developers kept track of performance metrics, such as timestamps and event durations, using simple JavaScript:

var start = new Date().getTime();
// do stuff
var now = new Date().getTime();
var duration = now - start;

What’s wrong with this?

Well, nothing really, but… we can do better.

First, as discussed previously, Date().getTime() is not reliable.

Second, by logging your performance metrics into the standard interface of UserTiming, browser developer tools and third-party analytics services will be able to read and understand your performance metrics.

Marks and Measures

Developers generally use two core ideas to profile their code. First, they keep track of timestamps for when events happen. They may log these timestamps (e.g. via Date().getTime()) into JavaScript variables to be used later.

Second, developers often keep track of durations of events. This is often done by taking the difference of two timestamps.

Timestamps and durations correspond to “marks” and “measures” in UserTiming terms. A mark is a timestamp, in DOMHighResTimeStamp format. A measure is a duration, the difference between two marks, also measured in milliseconds.

How to use

Creating a mark or measure is done via the window.performance interface:

partial interface Performance {
    void mark(DOMString markName);

    void clearMarks(optional DOMString markName);

    void measure(DOMString measureName, optional DOMString startMark,
        optional DOMString endMark);

    void clearMeasures(optional DOMString measureName);
};

interface PerformanceEntry {
  readonly attribute DOMString name;
  readonly attribute DOMString entryType;
  readonly attribute DOMHighResTimeStamp startTime;
  readonly attribute DOMHighResTimeStamp duration;
};

interface PerformanceMark : PerformanceEntry { };

interface PerformanceMeasure : PerformanceEntry { };

A mark (PerformanceMark) is an example of a PerformanceEntry, with no additional attributes:

  • name is the mark’s name
  • entryType is "mark"
  • startTime is the time the mark was created
  • duration is always 0

A measure (PerformanceMeasure) is also an example of a PerformanceEntry, with no additional attributes:

  • name is the measure’s name
  • entryType is "measure"
  • startTime is the startTime of the start mark
  • duration is the difference between the startTime of the start and end mark

Example Usage

Let’s start by logging a couple marks (timestamps):

// mark
performance.mark("start");
// -> {"name": "start", "entryType": "mark", "startTime": 1, "duration": 0}

performance.mark("end");
// -> {"name": "end", "entryType": "mark", "startTime": 2, "duration": 0}

performance.mark("another");
// -> {"name": "another", "entryType": "mark", "startTime": 3, "duration": 0}
performance.mark("another");
// -> {"name": "another", "entryType": "mark", "startTime": 4, "duration": 0}
performance.mark("another");
// -> {"name": "another", "entryType": "mark", "startTime": 5, "duration": 0}

Later, you may want to compare two marks (start vs. end) to create a measure (called diff), such as:

performance.measure("diff", "start", "end");
// -> {"name": "diff", "entryType": "measure", "startTime": 1, "duration": 1}

Note that measure() always calculates the difference by taking the latest timestamp that was seen for a mark. So if you did a measure against the another marks in the example above, it will take the timestamp of the third call to mark("another"):

performance.measure("diffOfAnother", "start", "another");
// -> {"name": "diffOfAnother", "entryType": "measure", "startTime": 1, "duration": 4}

There are many ways to create a measure:

  • If you call measure(name), the startTime is assumed to be window.performance.timing.navigationStart and the endTime is assumed to be now.
  • If you call measure(name, startMarkName), the startTime is assumed to be startTime of the given mark’s name and the endTime is assumed to be now.
  • If you call measure(name, startMarkName, endMarkName), the startTime is assumed to be startTime of the given start mark’s name and the endTime is assumed to be the startTime of the given end mark’s name.

Some examples of using measure():

// log the beginning of our task (assuming now is '1')
performance.mark("start");
// -> {"name": "start", "entryType": "mark", "startTime": 1, "duration": 0}

// do work, then mark again (assuming now is '2')
performance.mark("start2");
// -> {"name": "start2", "entryType": "mark", "startTime": 2, "duration": 0}

// measure from navigationStart to now (assuming now is '3')
performance.measure("time to get to this point");
// -> {"name": "time to get to this point", "entryType": "measure", "startTime": 0, "duration": 3}

// measure from the "start" mark to now (assuming now is '4')
performance.measure("time to do stuff", "start");
// -> {"name": "time to do stuff", "entryType": "measure", "startTime": 1, "duration": 3}

// measure from the "start" mark to the "start2" mark
performance.measure("time from start to start2", "start", "start2");
// -> {"name": "time from start to start2", "entryType": "measure", "startTime": 1, "duration": 1}
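These primitives compose nicely. For example, a small helper (timeIt is our name, not part of the spec) that marks before and after a function runs, then records a measure of it:

```javascript
// Wrap a function call in a pair of marks plus a measure of the
// time spent between them.
function timeIt(name, fn) {
    performance.mark(name + ":start");
    var result = fn();
    performance.mark(name + ":end");
    performance.measure(name, name + ":start", name + ":end");
    return result;
}
```

After `timeIt("render", render)`, the "render" measure is queryable from the PerformanceTimeline like any other entry.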

Once a mark or measure has been created, you can query for all marks, all measures, or specific marks/measures via the PerformanceTimeline. Here’s a review of the PerformanceTimeline methods:

  • getEntries(): Gets all entries in the timeline
  • getEntriesByType(type): Gets all entries of the specified type (e.g. resource, mark, measure)
  • getEntriesByName(name, type): Gets all entries with the specified name (e.g. URL or mark name). type is optional, and will filter the list to that type.

Here’s an example of using the PerformanceTimeline to fetch a mark:

performance.getEntriesByType("mark");
// -> [{"name": "start", "entryType": "mark", "startTime": 1, "duration": 0},
//     {"name": "start2", "entryType": "mark", "startTime": 2, "duration": 0}]

performance.getEntriesByName("time from start to start2", "measure");
// -> [{"name": "time from start to start2", "entryType": "measure", "startTime": 1, "duration": 1}]

Standard Mark Names

There are a couple of mark names that are suggested by the W3C specification to have special meanings:

  • mark_fully_loaded: The time when the page is considered fully loaded as marked by the developer in their application
  • mark_fully_visible: The time when the page is considered completely visible to an end-user as marked by the developer in their application
  • mark_above_the_fold: The time when all of the content in the visible viewport has been presented to the end-user as marked by the developer in their application
  • mark_time_to_user_action: The time of the first user interaction with the page during or after a navigation, such as scroll or click, as marked by the developer in their application

By using these standardized names, other third-party tools can pick up on your meanings and treat them specially (for example, by overlaying them on your waterfall).

Obviously, you can use these mark names for anything you want, and don’t have to stick by the recommended meanings.


So why would you use UserTiming over just Date().getTime()?

First, it uses the PerformanceTimeline, so your marks and measures live in the PerformanceTimeline alongside other performance events.

Second, it uses DOMHighResTimestamp instead of Date, so the timestamps have sub-millisecond resolution and are monotonically non-decreasing (so they aren’t affected by changes to the client’s clock).

Third, UserTiming is efficient: the browser can capture and store these timestamps natively, with less overhead than equivalent bookkeeping in JavaScript.

Developer Tools

UserTiming marks and measures are currently available in the Internet Explorer F12 Developer Tools. They are called “User marks” and are shown as upside-down red triangles below:

UserTiming in IE F12 Dev Tools

Chrome and Firefox do not yet show marks or measures in their Developer Tools.

If you call console.timeStamp("foo"), a marker will show up in the Chrome and Firefox timelines. However, if you call both console.timeStamp() and performance.mark(), you will get two markers in Internet Explorer.

So for now, if you want a cross-browser compatible way of showing a single marker in dev tools, you would have to sniff the UA (which is generally not recommended). There is an open bug on Chromium to get UserTiming marks to show in the dev tools.

Use Cases

How could you use UserTiming? Here are some ideas:

  • Any place that you’re already logging timestamps or calculating durations could be switched to UserTiming
  • Easy way to add profiling events to your application
  • Note important scenario durations in your Performance Timeline
  • Measure important durations for analytics


If you’re adding UserTiming instrumentation to your page, you probably also want to consume it. One way is to grab everything, package it up, and send it back to your own server for analysis.

In my UserTiming Compression article, I go over a couple of ways to do this. Compared to sending the raw UserTiming JSON, usertiming-compression.js can reduce the byte size to just 10 to 15% of the original.


UserTiming is available in most modern browsers. According to CanIUse, 59% of world-wide browser market share supports UserTiming, as of May 2015. This includes Internet Explorer 10+, Firefox 38+, Chrome 25+, Opera 15+ and Android Browser 4.4+.

CanIUse - UserTiming

As usual, Safari (iOS and Mac) doesn’t yet include support for UserTiming.

However, if you want to use UserTiming everywhere, there are polyfills available that emulate it in browsers without native support.

I have one such polyfill, UserTiming.js, available on Github.

DIY / Open Source / Commercial

If you want to collect UserTiming data yourself, you can compress and beacon it to your back-end for processing.

Several tools already consume UserTiming data, including WebPageTest, Google Analytics, Boomerang and SOASTA mPulse. Here’s WebPageTest displaying UserTiming marks:

WebPageTest UserTiming

SOASTA mPulse collects UserTiming information for any Custom Timers you specify (I work at SOASTA, on mPulse and Boomerang):



UserTiming is a great way to log your performance metrics into a standardized interface. As more services and browsers support UserTiming, you will be able to see your data in more and more places.

That wraps up our talk about how to monitor and measure the performance of your web apps. Hope you enjoyed it.


Updates:

  • 2015-12-01: Added Compressing section

ResourceTiming in Practice

May 28th, 2015

Last updated: January 2019

Table Of Contents

  1. Introduction
  2. How was it done before?
  3. How to use
    3.1 Interlude: PerformanceTimeline
    3.2 ResourceTiming
    3.3 Initiator Types
    3.4 What Resources are Included
    3.5 Crawling IFRAMEs
    3.6 Cached Resources
    3.7 304 Not Modified
    3.8 The ResourceTiming Buffer
    3.9 Timing-Allow-Origin
    3.10 Blocking Time
    3.11 Content Sizes
    3.12 Service Workers
    3.13 Compressing ResourceTiming Data
    3.14 PerformanceObserver
  4. Use Cases
    4.1 DIY and Open-Source
    4.2 Commercial Solutions
  5. Availability
  6. Tips
  7. Browser Bugs
  8. Conclusion
  9. Updates

1. Introduction

ResourceTiming is a specification developed by the W3C Web Performance working group, with the goal of exposing accurate performance metrics about all of the resources downloaded during the page load experience, such as images, CSS and JavaScript.

ResourceTiming builds on top of the concepts of NavigationTiming and provides many of the same measurements, such as the timings of each resource’s DNS, TCP, request and response phases, along with the final “loaded” timestamp.

ResourceTiming takes its inspiration from resource Waterfalls. If you’ve ever looked at the Networking tab in Internet Explorer, Chrome or Firefox developer tools, you’ve seen a Waterfall before. A Waterfall shows all of the resources fetched from the network in a timeline, so you can quickly visualize any issues. Here’s an example from the Chrome Developer Tools:

ResourceTiming inspiration

ResourceTiming (Level 1) is a Candidate Recommendation, which means it has been shipped in major browsers. ResourceTiming (Level 2) is a Working Draft and adds additional features like content sizes and new attributes. It is still a work-in-progress, but many browsers already support it.

As of January 2019, 91% of the world-wide browser market-share supports ResourceTiming.

How was it done before?

Prior to ResourceTiming, you could measure the time it took to download resources on your page by hooking into the associated element’s onload event, such as for Images.

Take this example code:

var start = new Date().getTime();
var image1 = new Image();

image1.onload = function() {
    var now = new Date().getTime();
    var latency = now - start;
    alert("End to end resource fetch: " + latency);
};

image1.src = '';

With the code above, the image is inserted into the DOM when the script runs, at which point it sets the start variable to the current time. The image’s onload event calculates how long it took for the resource to be fetched.

While this is one method for measuring the download time of an image, it’s not very practical.

First of all, it only measures the end-to-end download time, plus any overhead required for the browser to fire the onload callback. For images, this could also include the time it takes to parse and render the image. You cannot get a breakdown of DNS, TCP, SSL, request or response times with this method.

Another issue is the use of new Date().getTime(), which has some major drawbacks. See our discussion on DOMHighResTimeStamp in the NavigationTiming discussion for more details.

Most importantly, to use this method you have to construct your entire web app dynamically, at runtime. You would have to insert every <img>, <link rel="stylesheet"> and <script> tag via JavaScript to instrument everything. This is neither practical nor performant: among other things, the browser cannot pre-fetch resources that would have otherwise been declared in the HTML.

Finally, it’s impossible to measure all resources that are fetched by the browser using this method. For example, it’s not possible to hook into stylesheets or fonts defined via @import or @font-face statements.

ResourceTiming addresses all of these problems.

How to use

ResourceTiming data is available via several methods on the window.performance interface:

window.performance.getEntries();
window.performance.getEntriesByType(type);
window.performance.getEntriesByName(name, type);

Each of these functions returns a list of PerformanceEntry objects. getEntries() returns every entry in the PerformanceTimeline (see below), while getEntriesByType("resource") or getEntriesByName("foo", "resource") limit the query to entries of the type PerformanceResourceTiming, which inherits from PerformanceEntry.
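For example, the following queries all pull from the same timeline, just filtered differently (the URL below is a hypothetical example):

```javascript
// all entries in the PerformanceTimeline
var all = performance.getEntries();

// only ResourceTiming entries
var resources = performance.getEntriesByType("resource");

// only entries for one specific URL (hypothetical)
var specific = performance.getEntriesByName("http://example.com/foo.js", "resource");

console.log(Array.isArray(resources)); // true
```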

That may sound confusing, but when you look at the array of ResourceTiming objects, they’ll simply have a combination of the attributes below. Here’s the WebIDL (definition) of a PerformanceEntry:

interface PerformanceEntry {
    readonly attribute DOMString name;
    readonly attribute DOMString entryType;

    readonly attribute DOMHighResTimeStamp startTime;
    readonly attribute DOMHighResTimeStamp duration;
};

Each PerformanceResourceTiming is a PerformanceEntry, so has the above attributes, as well as the attributes below:

interface PerformanceResourceTiming : PerformanceEntry {
    readonly attribute DOMString           initiatorType;
    readonly attribute DOMString           nextHopProtocol;
    readonly attribute DOMHighResTimeStamp workerStart;
    readonly attribute DOMHighResTimeStamp redirectStart;
    readonly attribute DOMHighResTimeStamp redirectEnd;
    readonly attribute DOMHighResTimeStamp fetchStart;
    readonly attribute DOMHighResTimeStamp domainLookupStart;
    readonly attribute DOMHighResTimeStamp domainLookupEnd;
    readonly attribute DOMHighResTimeStamp connectStart;
    readonly attribute DOMHighResTimeStamp connectEnd;
    readonly attribute DOMHighResTimeStamp secureConnectionStart;
    readonly attribute DOMHighResTimeStamp requestStart;
    readonly attribute DOMHighResTimeStamp responseStart;
    readonly attribute DOMHighResTimeStamp responseEnd;
    readonly attribute unsigned long long  transferSize;
    readonly attribute unsigned long long  encodedBodySize;
    readonly attribute unsigned long long  decodedBodySize;
    serializer = {inherit, attribute};
};

Interlude: PerformanceTimeline

The PerformanceTimeline is a critical part of ResourceTiming, and one interface that you can use to fetch ResourceTiming data, as well as other performance information, such as UserTiming data. See also the section on the PerformanceObserver for another way of consuming ResourceTiming data.

The methods getEntries(), getEntriesByType() and getEntriesByName() that you saw above are the primary interfaces of the PerformanceTimeline. The idea is to expose all browser performance information via a standard interface.

All browsers that support ResourceTiming (or UserTiming) will also support the PerformanceTimeline.

Here are the primary methods:

  • getEntries(): Gets all entries in the timeline
  • getEntriesByType(type): Gets all entries of the specified type (eg resource, mark, measure)
  • getEntriesByName(name, type): Gets all entries with the specified name (eg URL or mark name). type is optional, and will filter the list to that type.

We’ll use the PerformanceTimeline to fetch ResourceTiming data (and UserTiming data).

Back to ResourceTiming

ResourceTiming takes its inspiration from the NavigationTiming timeline.

Here are the phases a single resource would go through during the fetch process:

ResourceTiming timeline

To fetch all of the resources on a page, you simply call one of the PerformanceTimeline methods:

var resources = window.performance.getEntriesByType("resource");

/* eg:
[{
        name: "",

        entryType: "resource",

        startTime: 566.357000003336,
        duration: 4.275999992387369,

        initiatorType: "img",
        nextHopProtocol: "h2",

        workerStart: 300.0,
        redirectStart: 0,
        redirectEnd: 0,
        fetchStart: 566.357000003336,
        domainLookupStart: 566.357000003336,
        domainLookupEnd: 566.357000003336,
        connectStart: 566.357000003336,
        secureConnectionStart: 0,
        connectEnd: 566.357000003336,
        requestStart: 568.4959999925923,
        responseStart: 569.4220000004862,
        responseEnd: 570.6329999957234,

        transferSize: 1000,
        encodedBodySize: 1000,
        decodedBodySize: 1000,
    }, ...
]
*/

Please note that all of the timestamps are DOMHighResTimeStamps, so they are relative to window.performance.timing.navigationStart or window.performance.timeOrigin. Thus a value of 500 means 500 milliseconds after the page load started.
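Converting one of these relative timestamps back to an absolute (epoch) time is therefore just an addition. A small sketch, using a hypothetical timeOrigin value:

```javascript
// Convert a relative DOMHighResTimeStamp to absolute epoch milliseconds.
// The values below are hypothetical samples.
function toEpochTime(timeOrigin, relativeTime) {
  return timeOrigin + relativeTime;
}

var timeOrigin = 1449700000000; // e.g. window.performance.timeOrigin
var absolute = toEpochTime(timeOrigin, 566.357);
```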

Here is a description of all of the ResourceTiming attributes:

  • name is the fully-resolved URL of the resource (relative URLs in your HTML will be expanded to include the full protocol, domain name and path)
  • entryType will always be "resource" for ResourceTiming entries
  • startTime is the time the resource started being fetched
  • duration is the overall time required to fetch the resource
  • initiatorType is the localName of the element that initiated the fetch of the resource (see details below)
  • nextHopProtocol: ALPN Protocol ID such as http/0.9 http/1.0 http/1.1 h2 hq spdy/3 (ResourceTiming Level 2)
  • workerStart is the time immediately before the active Service Worker received the fetch event, if a ServiceWorker is installed
  • redirectStart and redirectEnd encompass the time it took to fetch any previous resources that redirected to the final one listed. If either timestamp is 0, there were no redirects, or one of the redirects wasn’t from the same origin as this resource.
  • fetchStart is the time this specific resource started being fetched, not including redirects
  • domainLookupStart and domainLookupEnd are the timestamps for DNS lookups
  • connectStart and connectEnd are timestamps for the TCP connection
  • secureConnectionStart is the start timestamp of the SSL handshake, if any. If the connection was over HTTP, or if the browser doesn’t support this timestamp (eg. Internet Explorer), it will be 0.
  • requestStart is the timestamp that the browser started to request the resource from the remote server
  • responseStart and responseEnd are the timestamps for the start of the response and when it finished downloading
  • transferSize: Bytes transferred for the HTTP response header and content body (ResourceTiming Level 2)
  • decodedBodySize: Size of the body after removing any applied content-codings (ResourceTiming Level 2)
  • encodedBodySize: Size of the body prior to removing any applied content-codings (ResourceTiming Level 2)

duration includes the time it took to fetch all redirected resources (if any) as well as the final resource. To track the overall time it took to fetch just the final resource, you may want to use (responseEnd - fetchStart).
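Putting these attributes together, the per-phase durations can be derived with simple subtraction. A sketch, using a hypothetical entry object shaped like the attributes above (not a live browser entry):

```javascript
// Derive per-phase durations from a ResourceTiming-shaped entry.
function getPhases(entry) {
  return {
    redirect: entry.redirectEnd - entry.redirectStart,
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    tcp: entry.connectEnd - entry.connectStart,
    request: entry.responseStart - entry.requestStart,
    response: entry.responseEnd - entry.responseStart,
    // total fetch time for the final resource (excludes redirects)
    fetch: entry.responseEnd - entry.fetchStart
  };
}

// hypothetical sample entry
var sample = {
  redirectStart: 0, redirectEnd: 0,
  fetchStart: 566, domainLookupStart: 566, domainLookupEnd: 566,
  connectStart: 566, connectEnd: 566,
  requestStart: 568.5, responseStart: 569.4, responseEnd: 570.6
};

console.log(getPhases(sample).fetch.toFixed(1)); // 4.6
```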

ResourceTiming does not (yet) include attributes that expose the HTTP status code of the resource (for privacy concerns).

Initiator Types

initiatorType is the localName of the element that fetched the resource — in other words, the name of the associated HTML element.

The most common values seen for this attribute are:

  • img
  • link
  • script
  • css: url(), @import
  • xmlhttprequest
  • iframe (known as subdocument in some versions of IE)
  • body
  • input
  • frame
  • object
  • image
  • beacon
  • fetch
  • video
  • audio
  • source
  • track
  • embed
  • eventsource
  • navigation
  • other
  • use

It’s important to note that initiatorType is not a “Content Type”. It is the element or mechanism that triggered the fetch, not the type of content fetched.

Some of the above values can be confusing at first glance. For example, a .css file may have an initiatorType of "link" or "css", since it can be fetched either via a <link> tag or via an @import in another stylesheet. Likewise, an entry with an initiatorType of "css" might actually be an image such as foo.jpg, fetched via a background: url(...) rule.

The iframe initiator type is for <IFRAME>s on the page, and the duration will be how long it took to load that frame’s HTML (e.g. how long it took for responseEnd of the HTML). It will not include the time it took for the <IFRAME> itself to fire its onload event, so resources fetched within the <IFRAME> will not be represented in an iframe‘s duration. (Example test case)

Here’s a list of common HTML elements and JavaScript APIs and what initiatorType they should map to:

  • <img src="...">: img
  • <img srcset="...">: img
  • <link rel="stylesheet" href="...">: link
  • <link rel="prefetch" href="...">: link
  • <link rel="preload" href="...">: link
  • <link rel="prerender" href="...">: link
  • <link rel="manifest" href="...">: link
  • <script src="...">: script
  • CSS @font-face { src: url(...) }: css
  • CSS background: url(...): css
  • CSS @import url(...): css
  • CSS cursor: url(...): css
  • CSS list-style-image: url(...): css
  • <body background=''>: body
  • <input src=''>: input
  • XMLHttpRequest: xmlhttprequest
  • <iframe src="...">: iframe
  • <frame src="...">: frame
  • <object>: object
  • <svg><image xlink:href="...">: image
  • <svg><use>: use
  • navigator.sendBeacon(...): beacon
  • fetch(...): fetch
  • <video src="...">: video
  • <video poster="...">: video
  • <video><source src="..."></video>: source
  • <audio src="...">: audio
  • <audio><source src="..."></audio>: source
  • <picture><source srcset="..."></picture>: source
  • <picture><img src="..."></picture>: img
  • <picture><img srcset="..."></picture>: img
  • <track src="...">: track
  • <embed src="...">: embed
  • favicon.ico: link
  • EventSource: eventsource

Not all browsers correctly report the initiatorType for the above resources. See this web-platform-tests test case for details.

What Resources are Included

All of the resources that your browser fetches to construct the page should be listed in the ResourceTiming data. This includes, but is not limited to images, scripts, css, fonts, videos, IFRAMEs and XHRs.

Some browsers (eg. Internet Explorer) may include other non-fetched resources, such as about:blank and javascript: URLs in the ResourceTiming data. This is likely a bug and may be fixed in upcoming versions, but you may want to filter out non-http: and https: protocols.

Additionally, some browser extensions may trigger downloads and thus you may see some of those downloads in your ResourceTiming data as well.

Not all resources will be fetched successfully. There might have been a networking error, due to a DNS, TCP or SSL/TLS negotiation failure. Or, the server might return a 4xx or 5xx response. How this information is surfaced in ResourceTiming depends on the browser:

  • DNS failure (cross-origin)
    • Chrome: No ResourceTiming event
    • Internet Explorer: domainLookupStart through responseStart are 0. responseEnd and duration are non-zero.
    • Edge: domainLookupStart through responseStart are 0. responseEnd and duration are non-zero.
    • Firefox: domainLookupStart through responseStart are 0. duration is 0 in some cases. responseEnd is non-zero.
    • Safari: No ResourceTiming event
  • TCP failure (cross-origin)
    • Chrome: No ResourceTiming event
    • Internet Explorer: domainLookupStart through responseStart are 0. responseEnd and duration are non-zero.
    • Edge: domainLookupStart through responseStart are 0. responseEnd and duration are non-zero.
    • Firefox: domainLookupStart through responseStart and duration are 0. responseEnd is non-zero.
    • Safari: No ResourceTiming event
  • SSL failure (cross-origin)
    • Chrome: No ResourceTiming event
    • Internet Explorer: domainLookupStart through responseStart are 0. responseEnd and duration are non-zero.
    • Edge: domainLookupStart through responseStart are 0. responseEnd and duration are non-zero.
    • Firefox: domainLookupStart through responseStart and duration are 0. responseEnd is non-zero.
    • Safari: No ResourceTiming event
  • 4xx/5xx response (same-origin)
    • Chrome: No ResourceTiming event
    • Internet Explorer: All timestamps are non-zero.
    • Edge: All timestamps are non-zero.
    • Firefox: All timestamps are non-zero.
    • Safari: No ResourceTiming event.
  • 4xx/5xx response (cross-origin)
    • Chrome: No ResourceTiming event.
    • Internet Explorer: startTime, fetchStart, responseEnd and duration are non-zero.
    • Edge: startTime, fetchStart, responseEnd and duration are non-zero.
    • Firefox: startTime, fetchStart, responseEnd and duration are non-zero.
    • Safari: No ResourceTiming event.

The working group is attempting to get these behaviors more consistent across browsers. You can read this post for further details as well as inconsistencies found. See the browser bugs section for relevant bugs.

Note that the root page (your HTML) is not included in ResourceTiming. You can get all of that data from NavigationTiming.

An additional set of resources that may be missing from ResourceTiming are those fetched by cross-origin stylesheets that were loaded with a no-cors policy.

There are a few additional reasons why a resource might not be in the ResourceTiming data. See the Crawling IFRAMEs and Timing-Allow-Origin sections for more details.

Crawling IFRAMEs

There are two important caveats when working with ResourceTiming data:

  1. Each <IFRAME> on the page will only report on its own resources, so you must look at every frame’s performance.getEntriesByType("resource")
  2. You cannot access frame.performance.getEntriesByType() in a cross-origin frame

If you want to capture all of the resources that were fetched for a page load, you need to crawl all of the frames on the page (and sub-frames, etc), and join their entries to the main window’s.

This gist shows a naive way of crawling all frames. For a version that deals with all of the complexities of the crawl, such as adjusting resources in each frame to the correct startTime, you should check out Boomerang’s restiming.js plugin.
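A minimal version of such a crawl might look like the sketch below. It is deliberately simplified: it does not rebase child-frame timestamps to the top window's origin time, which a real implementation (like the Boomerang plugin) must do.

```javascript
// Naively collect ResourceTiming entries from a window and all of its
// same-origin descendant frames. Cross-origin frames will throw when
// accessed, so we swallow the error (their entries are invisible to us).
function collectResources(win) {
  var entries = [];
  try {
    entries = win.performance.getEntriesByType("resource").slice(0);
    for (var i = 0; i < win.frames.length; i++) {
      entries = entries.concat(collectResources(win.frames[i]));
    }
  } catch (e) {
    // cross-origin frame: nothing we can do
  }
  return entries;
}
```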

However, even if you attempt to crawl all frames on the page, many pages include third-party scripts, libraries, ads, and other content that loads within cross-origin frames. We have no way of accessing the ResourceTiming data from these frames. Over 30% of resources in the Alexa Top 1000 are completely invisible to ResourceTiming because they’re loaded in a cross-origin frame.

Please see my in-depth post on ResourceTiming Visibility for details on how this might affect you, and suggested workarounds.

Cached Resources

Cached resources will show up in ResourceTiming right alongside resources that were fetched from the network.

For browsers that do not support ResourceTiming Level 2 with the content size attributes, there’s no direct indicator for the resource that it was served from the cache. In practice, resources with a very short duration (say under 30 milliseconds) are likely to have been served from the browser’s cache. They might take a few milliseconds due to disk latencies.

Browsers that support ResourceTiming Level 2 expose the content size attributes, which gives us a lot more information about the cache state of each resource. We can look at transferSize to determine cache hits.

Here is example code to determine cache hit status:

function isCacheHit(entry) {
  // if we transferred bytes, it must not be a cache hit
  // (will return false for 304 Not Modified)
  if (entry.transferSize > 0) return false;

  // if the body size is non-zero, it must mean this is a
  // ResourceTiming2 browser, this was same-origin or TAO,
  // and transferSize was 0, so it was in the cache
  if (entry.decodedBodySize > 0) return true;

  // fall back to duration checking (non-RT2 or cross-origin)
  return entry.duration < 30;
}

This algorithm isn’t perfect, but probably covers 99% of cases.

Note that conditional validations that return a 304 Not Modified would be considered a cache miss with the above algorithm.

304 Not Modified

Conditionally fetched resources (with an If-Modified-Since or ETag header) might return a 304 Not Modified response.

In this case, the transferSize might be small because it just reflects the 304 Not Modified response headers and no content body. transferSize might be less than the encodedBodySize in this case.

encodedBodySize and decodedBodySize should be the body size of the previously-cached resource.

(there is a Chrome 65 browser bug that might result in the encodedBodySize and decodedBodySize being 0)

Here is example code to detect 304s:

function is304(entry) {
  if (entry.encodedBodySize > 0 &&
      entry.transferSize > 0 &&
      entry.transferSize < entry.encodedBodySize) {
    return true;
  }

  // unknown
  return null;
}

The ResourceTiming Buffer

There is a ResourceTiming buffer (per document / IFRAME) that stops filling after its limit is reached. By default, all modern browsers (except Internet Explorer / Edge) currently set this limit to 150 entries (per frame). Internet Explorer 10+ and Edge default to 500 entries (per frame).

The reasoning behind limiting the number of entries is to ensure that, for the vast majority of websites that are not consuming ResourceTiming entries, the browser’s memory isn’t consumed indefinitely holding on to this information. In addition, for sites that periodically fetch new resources (such as XHR polling), we wouldn’t want the ResourceTiming buffer to grow unbounded.

Thus, if you will be consuming ResourceTiming data, you need to be aware of this buffer. If your site only downloads a handful of resources for each page load (< 100), and does nothing afterwards, you probably won’t hit the limit.

However, if your site downloads over a hundred resources, or you want to be able to monitor for resources fetched on an ongoing basis, you can do one of three things.

First, you can listen for the onresourcetimingbufferfull event which gets fired on the document when the buffer is full. You can then use setResourceTimingBufferSize(n) or clearResourceTimings() to resize or clear the buffer.

As an example, to keep the buffer size at 150 yet continue tracking resources after the first 150 resources were added, you could do something like this:

if ("performance" in window) {
  function onBufferFull() {
    var latestEntries = performance.getEntriesByType("resource");

    // clear the buffer so new entries can continue to be added
    performance.clearResourceTimings();

    // analyze or beacon latestEntries, etc
  }

  performance.onresourcetimingbufferfull = performance.onwebkitresourcetimingbufferfull = onBufferFull;
}

Note onresourcetimingbufferfull is not currently supported in Internet Explorer (10, 11 or Edge).

If your site is on the verge of 150 resources, and you don’t want to manage the buffer, you could also just safely increase the buffer size to something reasonable in your HTML header:

if ("performance" in window
    && window.performance
    && window.performance.setResourceTimingBufferSize) {
  // 300 is a hypothetical limit; pick one that fits your site
  performance.setResourceTimingBufferSize(300);
}

(you should do this for any <iframe> that might load more than 150 resources too)

Don’t just setResourceTimingBufferSize(99999999), as this could grow your visitors’ browser memory unnecessarily.

Finally, you can also use a PerformanceObserver to manage your own “buffer” of ResourceTiming data.

Note: Some browsers are starting to update the default limit from 150 resources to 250 resources.


Timing-Allow-Origin

A cross-origin resource is any resource that doesn’t originate from the same domain as the page. For example, if your visitor is on your site and you’ve fetched resources from another domain (such as a CDN or a third party), those resources will be considered cross-origin.

By default, cross-origin resources only expose timestamps for the following attributes:

  • startTime (will equal fetchStart)
  • fetchStart
  • responseEnd
  • duration

This is to protect your privacy (so an attacker can’t load random URLs to see where you’ve been).

This means that all of the following attributes will be 0 for cross-origin resources:

  • redirectStart
  • redirectEnd
  • domainLookupStart
  • domainLookupEnd
  • connectStart
  • connectEnd
  • secureConnectionStart
  • requestStart
  • responseStart

In addition, all size information will be 0 for cross-origin resources:

  • transferSize
  • encodedBodySize
  • decodedBodySize

Finally, cross-origin resources that redirected will have a startTime that reflects only the final resource: startTime will equal fetchStart instead of redirectStart. This means the time of any redirect(s) will be hidden from ResourceTiming.
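You can detect these restricted entries with a simple check. A sketch (the sample entries below are hypothetical):

```javascript
// A cross-origin resource without Timing-Allow-Origin has all of the
// detailed timestamps zeroed out.
function isTimingRestricted(entry) {
  return entry.requestStart === 0 &&
         entry.responseStart === 0 &&
         entry.connectStart === 0 &&
         entry.domainLookupStart === 0;
}

// hypothetical sample entries
var crossOrigin = { requestStart: 0, responseStart: 0, connectStart: 0, domainLookupStart: 0 };
var sameOrigin = { requestStart: 568.5, responseStart: 569.4, connectStart: 566, domainLookupStart: 566 };

console.log(isTimingRestricted(crossOrigin)); // true
console.log(isTimingRestricted(sameOrigin)); // false
```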

Luckily, if you control the domains you’re fetching other resources from, you can override this default precaution by sending a Timing-Allow-Origin HTTP response header:

Timing-Allow-Origin = "Timing-Allow-Origin" ":" origin-list-or-null | "*"

In practice, most people that send the Timing-Allow-Origin HTTP header just send a wildcard origin:

Timing-Allow-Origin: *

So if you’re serving any of your content from another domain name (e.g. from a CDN), it is strongly recommended that you set the Timing-Allow-Origin header for those responses.

Thankfully, third-party libraries for widgets, ads, analytics, etc are starting to set the header on their content. Only about 13% currently do, but this is growing (according to the HTTP Archive). Notably, Google, Facebook, Disqus, and mPulse send this header for their scripts.

Blocking Time

Browsers will only open a limited number of connections to each unique origin (protocol/server name/port) when downloading resources.

If there are more resources than the # of connections, the later resources will be “blocking”, waiting for their turn to download.

Blocking time is generally seen as “missing periods” (non-zero durations) that occur between connectEnd and requestStart (when waiting on a Keep-Alive TCP connection to reuse), or between fetchStart and domainLookupStart (when waiting on things like the browser’s cache).

The duration attribute includes Blocking time. So in general, you may not want to use duration if you’re only interested in actual network timings.

Unfortunately, duration, startTime and responseEnd are the only attributes you get with cross-origin resources, so you can’t easily subtract out Blocking time from cross-origin resources.

To calculate Blocking time, you would do something like this:

var blockingTime = 0;
if (res.connectEnd && res.connectEnd === res.fetchStart) {
    blockingTime = res.requestStart - res.connectEnd;
} else if (res.domainLookupStart) {
    blockingTime = res.domainLookupStart - res.fetchStart;
}

Content Sizes

Beginning with ResourceTiming 2, content sizes are included for all same-origin or Timing-Allow-Origin resources:

  • transferSize: Bytes transferred for HTTP response header and content body
  • decodedBodySize: Size of the body after removing any applied content-codings
  • encodedBodySize: Size of the body prior to removing any applied content-codings

Some notes:

  • If transferSize is 0, and timestamps like responseStart are filled in, the resource was served from the cache
  • If transferSize is 0, but timestamps like responseStart are also 0, the resource was cross-origin, so you should look at the cached state algorithm to determine its cache state
  • transferSize might be less than encodedBodySize in cases where a conditional validation occurred (e.g. 304 Not Modified). In this case, transferSize would be the size of the 304 headers, while encodedBodySize would be the size of the cached response body from the previous request.
  • If encodedBodySize and decodedBodySize are non-0 and differ, the content was compressed (e.g. gzip or Brotli)
  • encodedBodySize might be 0 in some cases (e.g. HTTP 204 (No Content) or 3XX responses)
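The notes above can be condensed into a small classifier. A sketch, applied to hypothetical sample entries:

```javascript
// Classify an entry based on its ResourceTiming2 size attributes.
function describeSizes(entry) {
  if (entry.transferSize === 0) return "likely cache hit (or cross-origin)";
  if (entry.transferSize < entry.encodedBodySize) return "304 Not Modified";
  if (entry.decodedBodySize > entry.encodedBodySize) return "compressed transfer";
  return "uncompressed transfer";
}

// hypothetical sample entries
console.log(describeSizes({ transferSize: 0, encodedBodySize: 1000, decodedBodySize: 3500 }));
// -> "likely cache hit (or cross-origin)"
console.log(describeSizes({ transferSize: 1200, encodedBodySize: 1000, decodedBodySize: 3500 }));
// -> "compressed transfer"
```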


Service Workers

If you are using ServiceWorkers in your app, you can get information about the time the ServiceWorker activated (fetch was fired) for each resource via the workerStart attribute.

The difference between workerStart and fetchStart is the processing time of the ServiceWorker:

var workerProcessingTime = 0;
if (res.workerStart && res.fetchStart) {
    workerProcessingTime = res.fetchStart - res.workerStart;
}

Compressing ResourceTiming Data

The HTTP Archive tells us there are about 100 HTTP resources on average, per page, with an average URL length of 85 bytes.

On average, each resource is ~ 500 bytes when JSON.stringify()‘d.

That means you could expect around 45 KB of ResourceTiming data per page load on the “average” site.

If you’re considering beaconing ResourceTiming data back to your own servers for analysis, you may want to consider compressing it first.

There are a couple of things you can do to compress the data, and I’ve written about these methods already. I’ve shared an open-source script that can compress raw ResourceTiming data into something much smaller, like this (which contains 3 resources):

{
    "http://": {
        "": {
            "js/foo.js": "370,1z,1c",
            "css/foo.css": "48c,5k,14"
        },
        "": "312,34,56"
    }
}

Overall, we can compress ResourceTiming data down to about 15% of its original size.

Example code to do this compression is available on github.


PerformanceObserver

Instead of using performance.getEntriesByType("resource") to fetch all of the current resources from the ResourceTiming buffer, you could instead use a PerformanceObserver to get notified about all fetches.

Example usage:

if (typeof window.PerformanceObserver === "function") {
  var resourceTimings = [];

  var observer = new PerformanceObserver(function(list) {
    // list is a PerformanceObserverEntryList
    Array.prototype.push.apply(resourceTimings, list.getEntries());
  });

  observer.observe({entryTypes: ["resource"]});
}

The benefits of using a PerformanceObserver are:

  • You have stricter control over buffering old entries (if you want to buffer at all)
  • If there are multiple scripts or libraries on the page trying to manage the ResourceTiming buffer (by setting its size or clearing it), the PerformanceObserver won’t be affected.

However, there are two major challenges with using a PerformanceObserver for ResourceTiming:

  • You’ll need to register the PerformanceObserver before everything else on the page, ideally via an inline <script> tag in your page’s <head>. Otherwise, requests that fire before the PerformanceObserver initializes won’t be delivered to the callback. There is some work being done to add a buffered: true option, but it is not yet implemented in browsers. In the meantime, you can call observer.observe() and then immediately call performance.getEntriesByType("resource") to get the current buffer.
  • Since each frame on the page maintains its own buffer of ResourceTiming entries, and its own PerformanceObserver list, you will need to register a PerformanceObserver in every frame on the page, including child frames, grandchild frames, etc. This results in a race condition: you need to either monitor the page for all <iframe>s being created and immediately hook a PerformanceObserver in their window, or crawl all of the frames later and call performance.getEntriesByType("resource") anyway. There are some thoughts about adding a bubbles: true flag to make this easier.
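One way to handle the first challenge is to register the observer, immediately pull whatever is already in the buffer, and de-duplicate in case an entry shows up in both lists. The helper below is a sketch (the function name and the choice of (name, startTime) as the de-duplication key are mine, not from any library):

```javascript
// Hypothetical helper: merge already-buffered ResourceTiming entries
// with entries delivered by a PerformanceObserver, de-duplicating on
// the (name, startTime) pair.
function mergeEntries(buffered, observed) {
  var seen = {};
  var out = [];
  buffered.concat(observed).forEach(function(e) {
    var key = e.name + "|" + e.startTime;
    if (!seen[key]) {
      seen[key] = true;
      out.push(e);
    }
  });
  return out;
}
```

You would call this with performance.getEntriesByType("resource") as the first argument and your observer-collected array as the second.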

Use Cases

Now that ResourceTiming data is available in the browser in an accurate and reliable manner, there are a lot of things you can do with the information. Here are some ideas:

  • Send all ResourceTimings to your backend analytics
  • Raise an analytics event if any resource takes over X seconds to download (and trend this data)
  • Watch specific resources (e.g. third-party ads or analytics) and complain if they are slow
  • Monitor the overall health of your DNS infrastructure by beaconing DNS resolve time per-domain
  • Look for production resource errors (e.g. 4xx/5xx) in browsers that add errors to the buffer (IE/Firefox)
  • Use ResourceTiming to determine your site’s “visual complete” metric by looking at timings of all above-the-fold images
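As one example, the "takes over X seconds" idea above can be sketched like this (the function name and threshold are arbitrary):

```javascript
// Sketch: return the URLs of any resources slower than `thresholdMs`,
// e.g. to raise an analytics event for each one.
function findSlowResources(entries, thresholdMs) {
  return entries
    .filter(function(e) { return e.duration > thresholdMs; })
    .map(function(e) { return e.name; });
}
```

In a real page you would feed it performance.getEntriesByType("resource") (or your PerformanceObserver-collected array) and beacon the result.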

The possibilities are nearly endless. Please leave a comment with how you’re using ResourceTiming data.

DIY and Open-Source

Here are several interesting DIY / open-source solutions that utilize ResourceTiming data:

Andy Davies’ Waterfall.js shows a waterfall of any page’s resources via a bookmarklet:

Andy Davies' Waterfall.js

Mark Zeman’s Heatmap bookmarklet / Chrome extension gives a heatmap of when images loaded on your page:

Mark Zeman's Heatmap bookmarklet and extension

Nurun’s Performance Bookmarklet breaks down your resources and creates a waterfall and some interesting charts:

Nurun's Performance Bookmarklet

Boomerang also captures ResourceTiming data and beacons it back to your backend analytics server:

Commercial Solutions

If you don’t want to build or manage a DIY / Open-Source solution to gather ResourceTiming data, there are many great commercial services available.

Disclaimer: I work at Akamai, on mPulse and Boomerang

Akamai mPulse captures 100% of your site’s traffic and gives you Waterfalls for each visit:

Akamai mPulse Resource Timing

New Relic Browser:

New Relic Browser

App Dynamics Web EUEM:

App Dynamics Web EUEM

Dynatrace UEM

Dynatrace UEM


ResourceTiming is available in most modern browsers. According to CanIUse, 91% of world-wide browser market share supports ResourceTiming (as of January 2019). This includes Internet Explorer 10+, Edge, Firefox 36+, Chrome 25+, Opera 15+, Safari 11 and Android Browser 4.4+.

CanIUse - ResourceTiming - April 2018

There are no polyfills available for ResourceTiming, as the data simply is not available if the browser doesn’t expose it.


Here are some additional (and re-iterated) tips for using ResourceTiming data:

  • For many sites, most of your content will not be same-origin, so ensure all of your CDNs and third-party libraries send the Timing-Allow-Origin HTTP response header.
  • Each IFRAME will have its own ResourceTiming data, and those resources won’t be included in the parent FRAME/document, so you’ll need to traverse the document frames to get all resources.
  • Resources loaded from cross-origin frames will not be visible
  • ResourceTiming data does not include the HTTP response code, due to privacy concerns.
  • If you’re going to be managing the ResourceTiming buffer, make sure no other scripts are managing it as well (e.g. third-party analytics scripts). Otherwise, you may have two listeners for onresourcetimingbufferfull stomping on each other.
  • The duration attribute includes Blocking time (when a resource is blocked behind other resources on the same socket).
  • about:blank and javascript: URLs may be in the ResourceTiming data for some browsers, and you may want to filter them out.
  • Browser extensions may show up in ResourceTiming data, if they initiate downloads. We’ve seen Skype and other extensions show up.
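The last two tips can be handled with a simple filter before beaconing. This is a sketch — the exclusion list is an assumption you may want to extend for whatever extension schemes show up in your own data:

```javascript
// Sketch: drop about:blank, javascript: and extension-scheme entries
// before beaconing ResourceTiming data to your analytics backend.
function filterBeaconableEntries(entries) {
  return entries.filter(function(e) {
    return !/^(about:|javascript:|chrome-extension:)/.test(e.name);
  });
}
```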

ResourceTiming Browser Bugs

Browsers aren’t perfect, and unfortunately there are some outstanding browser bugs around ResourceTiming data. Here are some of the known ones (some of which may have been fixed by the time you read this):

Sidebar – it’s great that browser vendors are tracking these issues publicly.


ResourceTiming exposes accurate performance metrics for all of the resources fetched on your page. You can use this data for a variety of scenarios, from investigating the performance of your third-party libraries to taking specific actions when resources aren’t performing according to your performance goals.

Next up: Using UserTiming data to expose custom metrics for your JavaScript apps in a standardized way.

Other articles in this series:

More resources:


Updates

  • 2016-01-03: Updated Firefox’s 404 and DNS behavior via Aaron Peters
  • 2018-04:
    • Updated the PerformanceResourceTiming interface for ResourceTiming (Level 2) and descriptions of ResourceTiming 2 attributes
    • Updated an incorrect statement about initiatorType='iframe'. Previously this document stated that the duration would include the time it took for the <IFRAME> to download static embedded resources. This is not correct. duration only includes the time it takes to download the <IFRAME> HTML bytes (through responseEnd of the HTML, so it does not include the onload duration of the <IFRAME>).
    • Updated list of attributes that are 0 for cross-origin resources (to include size attributes)
    • Added a note with examples on how to crawl frames
    • Added a note on ResourceTiming Visibility and how cross-origin frames affect ResourceTiming
    • Removed some notes about older versions of Chrome
    • Added section on Content Sizes
    • Updated the Cached Resources section to add an algorithm for ResourceTiming2 data
    • Updated initiatorType list as well as added a map of common elements to what initiatorType they would be
    • Added a note about ResourceTiming missing resources fetched by cross-origin stylesheets fetched with no-cors policy
    • Added note about startTime missing when there are redirects with no Timing-Allow-Origin
    • Added a section for 304 Not Modified responses
    • Added a section on PerformanceObserver
    • Updated the ServiceWorker section
    • Added a section on Browser Bugs
    • Updated market share
  • 2018-06:
    • Updated note that IE/Edge have a default buffer size of 500 entries
    • Added note about change to increase recommended buffer size of 150 to 250
  • 2019-01