Author Archive

AMP: Does it Really Make Your Site Faster?

October 11th, 2016

Nigel Heron and I gave a talk about Accelerated Mobile Pages (AMP) at Velocity New York 2016.  We have slides available on Slideshare:

amp-does-it-really-make-your-site-faster

In this talk, we dig into AMP to determine whether or not it gives your visitors a better page load experience.  We cover:

  • What is AMP?
  • Why should AMP pages be faster?
  • How do we measure the real user experience for AMP pages?
  • A demo of how to use AMP analytics
  • Real-world performance data from AMP visitors
  • Real-world engagement / conversion data from AMP visitors

The talk is also available on YouTube.

Measuring Continuity

June 23rd, 2016

Your site’s page load performance is important (and there are tools like Boomerang to measure it), but how good is your visitor’s experience as they continue to interact with your site after it has loaded?

At Velocity 2016, Philip Tellis and I talked about how you can measure their experience (and emotion!) in Measuring Continuity:

Measuring Continuity

We cover how to capture a variety of user experience metrics such as:

  • Browser developer tools’ Timeline metrics such as FPS, CPU, network and heap usage
  • Interactions like user input, page visibility and device orientation
  • Complexity metrics including document size, node counts and mutations
  • User experience metrics like jank, responsiveness and reliability
  • Tracking emotion with rage clicks, dead clicks and missed clicks

Code samples for the talk are also available, and you can watch it on YouTube.

Particle Photon/Electron Remote Temperature and Humidity Logger

February 21st, 2016


After all the fun I had building a cheap and simple Spark Core Water Sensor for my sump-pump, I’m now using a Photon (half the price of the Spark Core) for remote temperature and humidity logging for my kegerator (keezer).  For just $24, you can have a remote sensor logging data to Adafruit.io, ThingSpeak, Amazon DynamoDB or any HTTP endpoint.

You can see the whole project on GitHub.

Compressing UserTiming

December 10th, 2015

UserTiming is a modern browser performance API that gives developers the ability to mark important events (timestamps) and measure durations (timestamp deltas) in their web apps. For an in-depth overview of how UserTiming works, you can see my article UserTiming in Practice or read Steve Souders’ excellent post with several examples for how to use UserTiming to measure your app.

UserTiming is very simple to use. Let’s do a brief review. If you want to mark an important event, just call window.performance.mark(markName):

// log the beginning of our task
performance.mark("start");

You can call .mark() as many times as you want, with whatever markName you want. You can repeat the same markName as well.
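
Measures work the same way: performance.measure(measureName, startMarkName, endMarkName) logs the duration between two marks:

// measure the duration between two marks
performance.mark("start");
// ... do some work ...
performance.mark("end");
performance.measure("task", "start", "end");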

The data is stored in the PerformanceTimeline. You query the PerformanceTimeline via methods like performance.getEntriesByName(markName):

// get the data back
var entries = performance.getEntriesByName("start");
// -> [{"name": "start", "entryType": "mark", "startTime": 1, "duration": 0}]

Pretty simple right? Again, see Steve’s article for some great use cases.

So let’s imagine you’re sold on using UserTiming. You start instrumenting your website, placing marks and measures throughout the life-cycle of your app. Now what?

The data isn’t useful unless you’re looking at it. On your own machine, you can query the PerformanceTimeline and see marks and measures in the browser developer tools. There are also third party services that give you a view of your UserTiming data.

What if you want to gather the data yourself? What if you’re interested in trending different marks or measures in your own analytics tools?

The easy approach is to simply fetch all of the marks and measures via performance.getEntriesByType(), stringify the JSON, and XHR it back to your stats engine.
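
Something like this (with a made-up /analytics endpoint) would do the trick:

// Naive approach: grab all marks and measures, stringify them, and XHR
// the result to a (placeholder) analytics endpoint.
var entries = performance.getEntriesByType("mark")
    .concat(performance.getEntriesByType("measure"));

// copy into plain objects so JSON.stringify captures just the fields we need
var payload = JSON.stringify(entries.map(function(e) {
    return { name: e.name, entryType: e.entryType,
             startTime: e.startTime, duration: e.duration };
}));

var xhr = new XMLHttpRequest();
xhr.open("POST", "/analytics", true);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(payload);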

But how big is that data?

Let’s look at some example data — this was captured from a website I was browsing:

{"duration":0,"entryType":"mark","name":"mark_perceived_load","startTime":1675.636999996641},
{"duration":0,"entryType":"mark","name":"mark_before_flex_bottom","startTime":1772.8529999985767},
{"duration":0,"entryType":"mark","name":"mark_after_flex_bottom","startTime":1986.944999996922},
{"duration":0,"entryType":"mark","name":"mark_js_load","startTime":2079.4459999997343},
{"duration":0,"entryType":"mark","name":"mark_before_deferred_js","startTime":2152.8769999968063},
{"duration":0,"entryType":"mark","name":"mark_after_deferred_js","startTime":2181.611999996676},
{"duration":0,"entryType":"mark","name":"mark_site_init","startTime":2289.4089999972493}]

That’s 657 bytes for just 7 marks. What if you want to log dozens, hundreds, or even thousands of important events on your page? What if you have a Single Page App, where the user can generate many events over the lifetime of their session?

Clearly, we can do better. The signal : noise ratio of stringified JSON isn’t that good. As performance-conscious developers, we should strive to minimize our visitors’ upstream bandwidth usage when sending our analytics packets.

Let’s see what we can do.

The Goal

Our goal is to reduce the size of an array of marks and measures down to a data structure that’s as small as possible so that we’re only left with a minimal payload that can be quickly beacon’d to a server for aggregate analysis.

For a similar domain-specific compression technique for ResourceTiming data, please see my post on Compressing ResourceTiming. The techniques we will discuss for UserTiming will build on some of the same things we can do for ResourceTiming data.

An additional goal is that we’re going to stick with techniques where the resulting compressed data doesn’t expand from URL encoding if used in a query-string parameter. This makes it easy to just tack on the data to an existing analytics or Real-User-Monitoring (RUM) beacon.

The Approach

There are two main areas of our data-structure that we can compress. Let’s take a single measure as an example:

{  
    "name":      "measureName",
    "entryType": "measure",
    "startTime": 2289.4089999972493,
    "duration":  100.12314141 
}

What data is important here? Each mark and measure has 4 attributes:

  1. Its name
  2. Whether it’s a mark or a measure
  3. Its start time
  4. Its duration (for marks, this is 0)

I’m going to suggest we can break these down into two main areas: the object and its payload. The object is simply the mark or measure’s name. The payload is its start time and, if it’s a measure, its duration. A duration implies that the object is a measure, so we don’t need to track that attribute independently.

Essentially, we can break up our UserTiming data into a key-value pair. Grouping by the mark or measure name lets us play some interesting games, so the name will be the key. The value (payload) will be the list of start times and durations for each mark or measure name.

First, we’ll compress the payload (all of the timestamps and durations). Then, we can compress the list of objects.

So, let’s start out by compressing the timestamps!

Compressing the Timestamps

The first thing we want to compress for each mark or measure are its timestamps.

To begin with, startTime and duration are in millisecond resolution, with microseconds in the fraction. Most people probably don’t need microsecond resolution, and it adds a ton of byte size to the payload. A startTime of 2289.4089999972493 can probably be compressed down to just 2,289 milliseconds without sacrificing much accuracy.

So let’s say we have 3 marks to begin with:

{"duration":0,"entryType":"mark","name":"mark1","startTime":100},
{"duration":0,"entryType":"mark","name":"mark1","startTime":150},
{"duration":0,"entryType":"mark","name":"mark1","startTime":500}

Grouping by mark name, we can reduce this structure to an array of start times for each mark:

{ "mark1": [100, 150, 500] }

One of the truths of UserTiming is that when you fetch the entries via performance.getEntries(), they come back sorted by startTime.

Let’s use this to our advantage, by offsetting each timestamp by the one in front of it. For example, the 150 timestamp is only 50ms away from the 100 timestamp before it, so its value can be instead set to 50. 500 is 350ms away from 150, so it gets set to 350. We end up with smaller integers this way, which will make compression easier later:

{ "mark1": [100, 50, 350] }

How can we compress the numbers further? Remember, one goal is to make the resulting data transmit easier on a URL (query string), so we mostly want to use the ASCII alpha-numeric set of characters.

One really easy way of reducing the number of bytes taken by a number in JavaScript is using Base-36 encoding. In other words, 0=0, 10=a, 35=z. Even better, JavaScript has this built in via Number.prototype.toString(36):

(35).toString(36)          == "z" (saves 1 character)
(99999999999).toString(36) == "19xtf1tr" (saves 3 characters)

Once we Base-36 encode all of our offset timestamps, we’re left with a smaller number of characters:

{ "mark1": ["2s", "1e", "9q"] }

Now that we have these timestamps offsets in Base-36, we can combine (join) them into a single string so they’re easily transmitted. We should avoid using the comma character (,), as it is one of the reserved characters of the URI spec (RFC 3986), so it will be escaped to %2C.

The list of non-URI-encoded characters is pretty small:

[0-9a-zA-Z] $ - _ . + ! * ' ( )

The period (.) looks a lot like a comma, so let’s go with that. Applying a simple Array.join("."), we get:

{ "mark1": "2s.1e.9q" }

So we’re really starting to reduce the byte size of these timestamps. But wait, there’s more we can do!

Let’s say we have some timestamps that came in at a semi-regular interval:

{"duration":0,"entryType":"mark","name":"mark1","startTime":100},
{"duration":0,"entryType":"mark","name":"mark1","startTime":200},
{"duration":0,"entryType":"mark","name":"mark1","startTime":300}

Compressed down, we get:

{ "mark1": "2s.2s.2s" }

Why should we repeat ourselves?

Let’s use one of the other non-URI-encoded characters, the asterisk (*), to note when a timestamp offset repeats itself:

  • A single * means it repeated twice
  • *[n] means it repeated n times.

So the above timestamps can be compressed further to:

{ "mark1": "2s*3" }

Obviously, this compression depends on the application’s characteristics, but periodic marks can be seen in the wild.

Durations

What about measures? Measures have the additional data component of a duration. For marks these are always 0 (you’re just logging a point in time), but durations are another millisecond attribute.

We can adapt our previous string to include durations, if available. We can even mix marks and measures of the same name and not get confused later.

Let’s use this data set as an example. One mark and two measures (sharing the same name):

{"duration":0,"entryType":"mark","name":"foo","startTime":100},
{"duration":100,"entryType":"measure","name":"foo","startTime":150},
{"duration":200,"entryType":"measure","name":"foo","startTime":500}

Instead of an array of Base36-encoded offset timestamps, we need to include a duration, if available. Picking another non-URI-encoded character, the underscore (_), we can easily “tack” this information onto the end of each startTime.

For example, the measure with a startTime of 150 has an offset of 50 from the preceding mark (1e in Base-36) and a duration of 100 (2s in Base-36), giving a simple string of 1e_2s.

Combining the above marks and measures, we get:

{ "foo": "2s.1e_2s.9q_5k" }

Later, when we’re decoding this, we haven’t lost track of the fact that there are both marks and measures intermixed here, since only measures have durations.
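
A sketch of how each individual entry could be encoded (offset and optional duration, both Base-36):

// Encode one mark/measure as [offset] or [offset]_[duration]
function encodeEntry(offset, duration) {
    var value = Math.round(offset).toString(36);

    if (duration) {
        // only measures have a non-zero duration
        value += "_" + Math.round(duration).toString(36);
    }

    return value;
}

encodeEntry(100);        // "2s"    (mark)
encodeEntry(50, 100);    // "1e_2s" (measure)
encodeEntry(350, 200);   // "9q_5k" (measure)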

Going back to our original example:

[{"duration":0,"entryType":"mark","name":"mark1","startTime":100},
{"duration":0,"entryType":"mark","name":"mark1","startTime":150},
{"duration":0,"entryType":"mark","name":"mark1","startTime":500}]

Let’s compare that JSON string to how we’ve compressed it (still in JSON form, which isn’t very URI-friendly):

{"mark1":"2s.1e.9q"}

198 bytes originally versus 21 bytes with just the above techniques, or about 10% of the original size.

Not bad so far.

Compressing the Array

Most sites won’t have just a single mark or measure name that they want to transmit. Most sites using UserTiming will have many different mark/measure names and values.

We’ve compressed the actual timestamps to a pretty small (URI-friendly) value, but what happens when we need to transmit an array of different marks/measures and their respective timestamps?

Let’s pretend there are 3 marks and 3 measures on the page, each with one timestamp. After applying timestamp compression, we’re left with:

{
    "mark1": "2s",
    "mark2": "5k",
    "mark3": "8c",
    "measure1": "2s_2s",
    "measure2": "5k_5k",
    "measure3": "8c_8c"
}

There are several ways we can compress this data to a format suitable for URL transmission. Let’s explore.

Using an Array

Remember, JSON is not URI friendly, mostly due to curly braces ({ }), quotes (") and colons (:) having to be escaped.

Even in a minified JSON form:

{"mark1":"2s","mark2":"5k","mark3":"8c","measure1
":"2s_2s","measure2":"5k_5k","measure3":"8c_8c"}
(98 bytes)

This is what it looks like after URI encoding:

%7B%22mark1%22%3A%222s%22%2C%22mark2%22%3A%225k%2
2%2C%22mark3%22%3A%228c%22%2C%22measure1%22%3A%22
2s_2s%22%2C%22measure2%22%3A%225k_5k%22%2C%22meas
ure3%22%3A%228c_8c%22%7D
(174 bytes)

Gah! That’s almost 77% overhead.

Since we have a list of known keys (names) and values, we could instead change this object into an “array” where we’re not using { } " : characters to delimit things.

Let’s use another URI-friendly character, the tilde (~), to separate each. Here’s what the format could look like:

[name1]~[timestamp1]~[name2]~[timestamp2]~[...]

Using our data:

mark1~2s~mark2~5k~mark3~8c~measure1~2s_2s~measure
2~5k_5k~measure3~8c_8c~
(73 bytes)

Note that this depends on your names not including a tilde, or, you can pre-escape tildes in names to %7E.
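
The flattening itself is trivial:

// Flatten { name: compressedValue } into name~value~name~value~...,
// pre-escaping any tildes in the names.
function toTildeArray(map) {
    return Object.keys(map).map(function(name) {
        return name.replace(/~/g, "%7E") + "~" + map[name];
    }).join("~");
}

toTildeArray({ mark1: "2s", mark2: "5k", measure1: "2s_2s" });
// "mark1~2s~mark2~5k~measure1~2s_2s"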

Using an Optimized Trie

That’s one way of compressing the data. In some cases, we can do better, especially if your names look similar.

One great technique we used in compressing ResourceTiming data is an optimized Trie. Essentially, you can compress strings anytime one is a prefix of another.

In our example above, mark1, mark2 and mark3 are perfect candidates, since they all have a stem of "mark". In optimized Trie form, our above data would look something closer to:

{
    "mark": {
        "1": "2s",
        "2": "5k",
        "3": "8c"
    },
    "measure": {
        "1": "2s_2s",
        "2": "5k_5k",
        "3": "8c_8c"
    }
}

Minified, this is about 12% smaller than the original non-Trie data:

{"mark":{"1":"2s","2":"5k","3":"8c"},"measure":{"
1":"2s_2s","2":"5k_5k","3":"8c_8c"}}
(86 bytes)

However, this is not as easily compressible into a tilde-separated array, since it’s no longer a flat data structure.

There’s actually a great way to compress this JSON data for URL transmission, called JSURL. Basically, JSURL replaces all non-URI-friendly characters with URI-friendly representations. Here’s what the above JSON looks like when regular URI-encoded:

%7B%22mark%22%3A%7B%221%22%3A%222s%22%2C%222%22%3
A%225k%22%2C%223%22%3A%228c%22%7D%2C%22measure%22
%3A%7B%22%0A1%22%3A%222s_2s%22%2C%222%22%3A%225k_
5k%22%2C%223%22%3A%228c_8c%22%7D%7D
(185 bytes)

Versus JSURL encoded:

~(m~(ark~(1~'2s~2~'5k~3~'8c)~easure~(1~'2s_2s~2~'
5k_5k~3~'8c_8c)))
(67 bytes)

This JSURL encoding of an optimized Trie is about 8% smaller than the tilde-separated array.
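
If you’re curious how an optimized Trie like this can be built, here’s a minimal sketch (not the actual library code; it assumes no name is an exact prefix of another name):

// Build a character-level trie from { name: value }, then collapse chains
// of single-child nodes ("m" -> "a" -> "r" -> "k" becomes "mark").
function buildOptimizedTrie(map) {
    var trie = {};

    Object.keys(map).forEach(function(name) {
        var node = trie;
        for (var i = 0; i < name.length - 1; i++) {
            var c = name.charAt(i);
            node[c] = node[c] || {};
            node = node[c];
        }
        // the final character maps directly to the compressed value
        node[name.charAt(name.length - 1)] = map[name];
    });

    function collapse(node) {
        if (typeof node === "string") {
            return node;
        }

        var out = {};
        Object.keys(node).forEach(function(key) {
            var child = collapse(node[key]);

            // merge single-child chains into one longer key
            while (typeof child === "object" && Object.keys(child).length === 1) {
                var only = Object.keys(child)[0];
                key += only;
                child = child[only];
            }

            out[key] = child;
        });

        return out;
    }

    return collapse(trie);
}

// buildOptimizedTrie({ mark1: "2s", mark2: "5k", mark3: "8c",
//                      measure1: "2s_2s", measure2: "5k_5k", measure3: "8c_8c" })
// -> { m: { ark: { 1: "2s", 2: "5k", 3: "8c" },
//           easure: { 1: "2s_2s", 2: "5k_5k", 3: "8c_8c" } } }
// which is exactly the shape the JSURL output above encodes.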

Using a Map

Finally, if you know what your mark / measure names will be ahead of time, you may not need to transmit the actual names at all. If the set of names is finite, you can maintain a map of name : index pairs and only transmit the index for each name.

Using the 3 marks and measures from before:

{
    "mark1": "2s",
    "mark2": "5k",
    "mark3": "8c",
    "measure1": "2s_2s",
    "measure2": "5k_5k",
    "measure3": "8c_8c"
}

What if we simply mapped these names to numbers 0-5:

{
    "mark1": 0,
    "mark2": 1,
    "mark3": 2,
    "measure1": 3,
    "measure2": 4,
    "measure3": 5
}

Since we no longer have to compress names via a Trie, we can go back to an optimized array. And since the size of the index is relatively small (values 0-35 fit into a single character), we can save some room by not having a dedicated character (~) that separates each index and value (timestamps).

Taking the above example, we can have each name fit into a string in this format:

[index1][timestamp1]~[index2][timestamp2]~[...]

Using our data:

02s~15k~28c~32s_2s~45k_5k~58c_8c
(32 bytes)

This structure is less than half the size of the optimized Trie (JSURL encoded).

If you have over 36 mapped name : index pairs, we can still accommodate them in this structure. Remember, at index 36 (the 37th value, counting from 0), (36).toString(36) == "10", which takes two characters. We can’t just use a two-character index, since our assumption above is that the index is only a single character.

One way of dealing with this is by adding a special encoding if the index is over a certain value. We’ll optimize the structure to assume you’re only going to use 36 values, but, if you have over 36, we can accommodate that as well. For example, let’s use one of the final non-URI-encoded characters we have left over, the dash (-):

If the first character of an item in the array is:

  • 0-z (index values 0 – 35), that is the index value
  • -, the next two characters are the index (plus 36)

Thus, the value 0 is encoded as 0, 35 is encoded as z, 36 is encoded as -00, and 1331 is encoded as -zz. This gives us a total of 1,332 mapped values (indexes 0 through 1331), each encoded in either one or three characters.

So, given compressed values of:

{
    "mark1": "2s",
    "mark2": "5k",
    "mark3": "8c"
}

And a mapping of:

{
    "mark1": 36,
    "mark2": 37,
    "mark3": 1331
}

You could compress this as:

-002s~-015k~-zz8c
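
Here’s a sketch of that index encoding plus the map-based compression (hypothetical helper names):

// Encode a name's index: 0-35 as one Base-36 character, 36+ as "-" plus
// two Base-36 characters (index minus 36).
function encodeIndex(index) {
    if (index < 36) {
        return index.toString(36);
    }

    var rest = (index - 36).toString(36);
    return "-" + (rest.length < 2 ? "0" + rest : rest);
}

// Compress { name: compressedValue } using a known name -> index map.
function compressWithMap(values, map) {
    return Object.keys(values).map(function(name) {
        return encodeIndex(map[name]) + values[name];
    }).join("~");
}

compressWithMap({ mark1: "2s", mark2: "5k", mark3: "8c" },
                { mark1: 36, mark2: 37, mark3: 1331 });
// "-002s~-015k~-zz8c"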

We now have 3 different ways of compressing our array of marks and measures.

We can even swap between them, depending on which compresses the best each time we gather UserTiming data.

Test Cases

So how do these techniques apply to some real-world (and concocted) data?

I navigated around the Alexa Top 50 (by traffic) websites, to see who’s using UserTiming (not many). I gathered any examples I could, and created some of my own test cases as well. With this, I currently have a corpus of 20 real and fake UserTiming examples.

Let’s first compare JSON.stringify() of our UserTiming data versus the combination of all of the techniques above:

+------------------------------+
¦ Test    ¦ JSON ¦ UTC ¦ UTC % ¦
+---------+------+-----+-------¦
¦ 01.json ¦ 415  ¦ 66  ¦ 16%   ¦
+---------+------+-----+-------¦
¦ 02.json ¦ 196  ¦ 11  ¦ 6%    ¦
+---------+------+-----+-------¦
¦ 03.json ¦ 521  ¦ 18  ¦ 3%    ¦
+---------+------+-----+-------¦
¦ 04.json ¦ 217  ¦ 36  ¦ 17%   ¦
+---------+------+-----+-------¦
¦ 05.json ¦ 364  ¦ 66  ¦ 18%   ¦
+---------+------+-----+-------¦
¦ 06.json ¦ 334  ¦ 43  ¦ 13%   ¦
+---------+------+-----+-------¦
¦ 07.json ¦ 460  ¦ 43  ¦ 9%    ¦
+---------+------+-----+-------¦
¦ 08.json ¦ 91   ¦ 20  ¦ 22%   ¦
+---------+------+-----+-------¦
¦ 09.json ¦ 749  ¦ 63  ¦ 8%    ¦
+---------+------+-----+-------¦
¦ 10.json ¦ 103  ¦ 32  ¦ 31%   ¦
+---------+------+-----+-------¦
¦ 11.json ¦ 231  ¦ 20  ¦ 9%    ¦
+---------+------+-----+-------¦
¦ 12.json ¦ 232  ¦ 19  ¦ 8%    ¦
+---------+------+-----+-------¦
¦ 13.json ¦ 172  ¦ 34  ¦ 20%   ¦
+---------+------+-----+-------¦
¦ 14.json ¦ 658  ¦ 145 ¦ 22%   ¦
+---------+------+-----+-------¦
¦ 15.json ¦ 89   ¦ 48  ¦ 54%   ¦
+---------+------+-----+-------¦
¦ 16.json ¦ 415  ¦ 33  ¦ 8%    ¦
+---------+------+-----+-------¦
¦ 17.json ¦ 196  ¦ 18  ¦ 9%    ¦
+---------+------+-----+-------¦
¦ 18.json ¦ 196  ¦ 8   ¦ 4%    ¦
+---------+------+-----+-------¦
¦ 19.json ¦ 228  ¦ 50  ¦ 22%   ¦
+---------+------+-----+-------¦
¦ 20.json ¦ 651  ¦ 38  ¦ 6%    ¦
+---------+------+-----+-------¦
¦ Total   ¦ 6518 ¦ 811 ¦ 12%   ¦
+------------------------------+

Key:
* JSON      = JSON.stringify(UserTiming).length (bytes)
* UTC       = Applying UserTimingCompression (bytes)
* UTC %     = UTC bytes / JSON bytes

Pretty good, right? On average, we shrink the data down to about 12% of its original size.

In addition, the resulting data is now URL-friendly.

UserTiming-Compression.js

usertiming-compression.js (and its companion, usertiming-decompression.js) are open-source JavaScript modules (UserTimingCompression and UserTimingDecompression) that apply all of the techniques above.

They are available on Github at github.com/nicjansma/usertiming-compression.js.

These scripts are meant to provide an easy, drop-in way of compressing your UserTiming data. They compress UserTiming via one of the methods listed above, depending on which way compresses best.

If you have intimate knowledge of your UserTiming marks, measures and how they’re organized, you could probably construct an even more optimized data structure for capturing and transmitting your UserTiming data. You could also trim the scripts to only use the compression technique that works best for you.

Versus Gzip / Deflate

Wait, why did we go through all of this mumbo-jumbo when there are already great ways of compressing data? Why not just gzip the stringified JSON?

That’s one approach. One challenge is that there’s no native support for gzip in JavaScript. Thankfully, you can use one of the excellent open-source libraries like pako.
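
A quick sketch of the comparison, assuming pako is loaded on the page and json holds the stringified UserTiming data:

// gzip the stringified UserTiming data with pako and compare byte sizes
var gzipped = pako.gzip(json);   // returns a Uint8Array of compressed bytes

console.log(json.length, "bytes raw vs", gzipped.length, "bytes gzipped");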

Let’s compare the UserTimingCompression techniques to gzipping the raw UserTiming JSON:

+----------------------------------------------------+
¦ Test    ¦ JSON ¦ UTC ¦ UTC % ¦ JSON.gz ¦ JSON.gz % ¦
+---------+------+-----+-------+---------+-----------¦
¦ 01.json ¦ 415  ¦ 66  ¦ 16%   ¦ 114     ¦ 27%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 02.json ¦ 196  ¦ 11  ¦ 6%    ¦ 74      ¦ 38%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 03.json ¦ 521  ¦ 18  ¦ 3%    ¦ 79      ¦ 15%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 04.json ¦ 217  ¦ 36  ¦ 17%   ¦ 92      ¦ 42%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 05.json ¦ 364  ¦ 66  ¦ 18%   ¦ 102     ¦ 28%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 06.json ¦ 334  ¦ 43  ¦ 13%   ¦ 96      ¦ 29%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 07.json ¦ 460  ¦ 43  ¦ 9%    ¦ 158     ¦ 34%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 08.json ¦ 91   ¦ 20  ¦ 22%   ¦ 88      ¦ 97%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 09.json ¦ 749  ¦ 63  ¦ 8%    ¦ 195     ¦ 26%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 10.json ¦ 103  ¦ 32  ¦ 31%   ¦ 102     ¦ 99%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 11.json ¦ 231  ¦ 20  ¦ 9%    ¦ 120     ¦ 52%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 12.json ¦ 232  ¦ 19  ¦ 8%    ¦ 123     ¦ 53%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 13.json ¦ 172  ¦ 34  ¦ 20%   ¦ 112     ¦ 65%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 14.json ¦ 658  ¦ 145 ¦ 22%   ¦ 217     ¦ 33%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 15.json ¦ 89   ¦ 48  ¦ 54%   ¦ 91      ¦ 102%      ¦
+---------+------+-----+-------+---------+-----------¦
¦ 16.json ¦ 415  ¦ 33  ¦ 8%    ¦ 114     ¦ 27%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 17.json ¦ 196  ¦ 18  ¦ 9%    ¦ 81      ¦ 41%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 18.json ¦ 196  ¦ 8   ¦ 4%    ¦ 74      ¦ 38%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 19.json ¦ 228  ¦ 50  ¦ 22%   ¦ 103     ¦ 45%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ 20.json ¦ 651  ¦ 38  ¦ 6%    ¦ 115     ¦ 18%       ¦
+---------+------+-----+-------+---------+-----------¦
¦ Total   ¦ 6518 ¦ 811 ¦ 12%   ¦ 2250    ¦ 35%       ¦
+----------------------------------------------------+

Key:
* JSON      = JSON.stringify(UserTiming).length (bytes)
* UTC       = Applying UserTimingCompression (bytes)
* UTC %     = UTC bytes / JSON bytes
* JSON.gz   = gzip(JSON.stringify(UserTiming)).length
* JSON.gz % = JSON.gz bytes / JSON bytes

As you can see, gzip does a pretty good job of compressing raw JSON (stringified) – on average, reducing the size to 35% of the original. However, UserTimingCompression does a much better job, reducing it to 12% of the original size.

What if instead of gzipping the UserTiming JSON, we gzip the minified timestamp map? For example, instead of:

[{"duration":0,"entryType":"mark","name":"mark1","startTime":100},
{"duration":0,"entryType":"mark","name":"mark1","startTime":150},
{"duration":0,"entryType":"mark","name":"mark1","startTime":500}]

What if we gzipped the output of compressing the timestamps?

{"mark1":"2s.1e.9q"}

Here are the results:

+-----------------------------------+
¦ Test    ¦ UTC ¦ UTC.gz ¦ UTC.gz % ¦
+---------+-----+--------+----------¦
¦ 01.json ¦ 66  ¦ 62     ¦ 94%      ¦
+---------+-----+--------+----------¦
¦ 02.json ¦ 11  ¦ 24     ¦ 218%     ¦
+---------+-----+--------+----------¦
¦ 03.json ¦ 18  ¦ 28     ¦ 156%     ¦
+---------+-----+--------+----------¦
¦ 04.json ¦ 36  ¦ 46     ¦ 128%     ¦
+---------+-----+--------+----------¦
¦ 05.json ¦ 66  ¦ 58     ¦ 88%      ¦
+---------+-----+--------+----------¦
¦ 06.json ¦ 43  ¦ 43     ¦ 100%     ¦
+---------+-----+--------+----------¦
¦ 07.json ¦ 43  ¦ 60     ¦ 140%     ¦
+---------+-----+--------+----------¦
¦ 08.json ¦ 20  ¦ 33     ¦ 165%     ¦
+---------+-----+--------+----------¦
¦ 09.json ¦ 63  ¦ 76     ¦ 121%     ¦
+---------+-----+--------+----------¦
¦ 10.json ¦ 32  ¦ 45     ¦ 141%     ¦
+---------+-----+--------+----------¦
¦ 11.json ¦ 20  ¦ 37     ¦ 185%     ¦
+---------+-----+--------+----------¦
¦ 12.json ¦ 19  ¦ 35     ¦ 184%     ¦
+---------+-----+--------+----------¦
¦ 13.json ¦ 34  ¦ 40     ¦ 118%     ¦
+---------+-----+--------+----------¦
¦ 14.json ¦ 145 ¦ 112    ¦ 77%      ¦
+---------+-----+--------+----------¦
¦ 15.json ¦ 48  ¦ 45     ¦ 94%      ¦
+---------+-----+--------+----------¦
¦ 16.json ¦ 33  ¦ 50     ¦ 152%     ¦
+---------+-----+--------+----------¦
¦ 17.json ¦ 18  ¦ 37     ¦ 206%     ¦
+---------+-----+--------+----------¦
¦ 18.json ¦ 8   ¦ 23     ¦ 288%     ¦
+---------+-----+--------+----------¦
¦ 19.json ¦ 50  ¦ 53     ¦ 106%     ¦
+---------+-----+--------+----------¦
¦ 20.json ¦ 38  ¦ 51     ¦ 134%     ¦
+---------+-----+--------+----------¦
¦ Total   ¦ 811 ¦ 958    ¦ 118%     ¦
+-----------------------------------+

Key:
* UTC     = Applying full UserTimingCompression (bytes)
* UTC.gz   = gzip(UTC timestamp compression).length
* UTC.gz % = UTC.gz bytes / UTC bytes

Even with pre-applying the timestamp compression and gzipping the result, gzip doesn’t beat the full UserTimingCompression techniques. Here, in general, gzip is 18% larger than UserTimingCompression. There are a few cases where gzip is better, notably in test cases with a lot of repeating strings.

Additionally, applying gzip requires your app include a JavaScript gzip library, like pako — whose deflate code is currently around 26.3 KB minified. usertiming-compression.js is much smaller, at only 3.9 KB minified.

Finally, if you’re using gzip compression, you can’t just stick the gzip data into a Query String, as URL encoding will increase its size tremendously.

If you’re already using gzip to compress data, it’s a decent choice, but applying some domain-specific knowledge about our data-structures gives us better compression in most cases.

Versus MessagePack

MessagePack is another interesting choice for compressing data. In fact, its motto is “It’s like JSON. but fast and small.” I like MessagePack and use it for other projects. MessagePack is an efficient binary serialization format that takes JSON input and distills it down to a minimal form. It works with any JSON data structure, and is very portable.

How does MessagePack compare to the UserTiming compression techniques?

MessagePack only compresses the original UserTiming JSON to 72% of its original size. Great for a general compression library, but not nearly as good as UserTimingCompression can do. Notably, this is because MessagePack is retaining the JSON strings (e.g. startTime, duration, etc) for each UserTiming object:

+--------------------------------------------------------+
¦         ¦ JSON ¦ UTC ¦ UTC % ¦ JSON.pack ¦ JSON.pack % ¦
+---------+------+-----+-------+-----------+-------------¦
¦ Total   ¦ 6518 ¦ 811 ¦ 12%   ¦ 4718      ¦ 72%         ¦
+--------------------------------------------------------+

Key:
* JSON        = JSON.stringify(UserTiming).length (bytes)
* UTC         = Applying UserTimingCompression (bytes)
* UTC %       = UTC bytes / JSON bytes
* JSON.pack   = MsgPack(JSON.stringify(UserTiming)).length
* JSON.pack % = JSON.pack bytes / JSON bytes

What if we just MessagePack the compressed timestamps? (e.g. {"mark1":"2s.1e.9q", ...})

+---------------------------------------+
¦ Test    ¦ UTC ¦ TS.pack  ¦ TS.pack %  ¦
+---------+-----+----------+------------¦
¦ 01.json ¦ 66  ¦ 73       ¦ 111%       ¦
+---------+-----+----------+------------¦
¦ 02.json ¦ 11  ¦ 12       ¦ 109%       ¦
+---------+-----+----------+------------¦
¦ 03.json ¦ 18  ¦ 19       ¦ 106%       ¦
+---------+-----+----------+------------¦
¦ 04.json ¦ 36  ¦ 43       ¦ 119%       ¦
+---------+-----+----------+------------¦
¦ 05.json ¦ 66  ¦ 76       ¦ 115%       ¦
+---------+-----+----------+------------¦
¦ 06.json ¦ 43  ¦ 44       ¦ 102%       ¦
+---------+-----+----------+------------¦
¦ 07.json ¦ 43  ¦ 43       ¦ 100%       ¦
+---------+-----+----------+------------¦
¦ 08.json ¦ 20  ¦ 21       ¦ 105%       ¦
+---------+-----+----------+------------¦
¦ 09.json ¦ 63  ¦ 63       ¦ 100%       ¦
+---------+-----+----------+------------¦
¦ 10.json ¦ 32  ¦ 33       ¦ 103%       ¦
+---------+-----+----------+------------¦
¦ 11.json ¦ 20  ¦ 21       ¦ 105%       ¦
+---------+-----+----------+------------¦
¦ 12.json ¦ 19  ¦ 20       ¦ 105%       ¦
+---------+-----+----------+------------¦
¦ 13.json ¦ 34  ¦ 33       ¦ 97%        ¦
+---------+-----+----------+------------¦
¦ 14.json ¦ 145 ¦ 171      ¦ 118%       ¦
+---------+-----+----------+------------¦
¦ 15.json ¦ 48  ¦ 31       ¦ 65%        ¦
+---------+-----+----------+------------¦
¦ 16.json ¦ 33  ¦ 40       ¦ 121%       ¦
+---------+-----+----------+------------¦
¦ 17.json ¦ 18  ¦ 21       ¦ 117%       ¦
+---------+-----+----------+------------¦
¦ 18.json ¦ 8   ¦ 11       ¦ 138%       ¦
+---------+-----+----------+------------¦
¦ 19.json ¦ 50  ¦ 52       ¦ 104%       ¦
+---------+-----+----------+------------¦
¦ 20.json ¦ 38  ¦ 40       ¦ 105%       ¦
+---------+-----+----------+------------¦
¦ Total   ¦ 811 ¦ 867      ¦ 107%       ¦
+---------------------------------------+

Key:
* UTC       = Applying full UserTimingCompression (bytes)
* TS.pack   = MsgPack(UTC timestamp compression).length
* TS.pack % = TS.pack bytes / UTC bytes

For our 20 test cases, MessagePack is about 7% larger than the UserTiming compression techniques.

Like using a JavaScript module for gzip, the most popular MessagePack JavaScript modules are pretty hefty, at 29.2 KB, 36.9 KB, and 104 KB. Compare this to only 3.9 KB minified for usertiming-compression.js.

Basically, if you have good domain-specific knowledge of your data-structures, you can often compress better than a general-case minimizer like gzip or MessagePack.

Conclusion

It’s fun taking a data-structure you want to work with and compressing it down as much as possible.

UserTiming is a great (and under-utilized) browser API that I hope to see get adopted more. If you’re already using UserTiming, you might have already solved the issue of how to capture, transmit and store these datapoints. If not, I hope these techniques and tools will help you on your way towards using the API.

Do you have ideas for how to compress this data down even further? Let me know!

usertiming-compression.js (and usertiming-decompression.js) are available on Github.


Forensic Tools for In-Depth Performance Investigations

October 15th, 2015

Another talk Philip Tellis and I gave at Velocity New York 2015 was about the forensic tools we use for investigating performance issues.  Check it out on Slideshare:

forensic-tools-for-in-depth-performance-investigations

In this talk, we cover a variety of tools such as WebPagetest, tcpdump, Wireshark, Cloudshark, browser developer tools, Chrome tracing, netlog, Fiddler, RUM, TamperMonkey, NodeJS, virtualization, Event Tracing for Windows (ETW), xperf and more while diving into real issues we’ve had to investigate in the past.

The talk is also available on YouTube.