Compressing ResourceTiming

November 7th, 2014

At SOASTA, we’re building tools and services to help our customers understand and improve the performance of their websites. Our mPulse product utilizes Real User Monitoring to capture data about page-load performance.

For browser-side data collection, mPulse uses Boomerang, which beacons every single page-load experience back to our real time analytics engine. Boomerang utilizes NavigationTiming when possible to relay accurate performance metrics about the page load, such as the timings of DNS, TCP, SSL and the HTTP response.

ResourceTiming is another important feature in modern browsers that gives JavaScript access to performance metrics about the page’s components fetched from the network, such as CSS, JavaScript and images. mPulse will soon be releasing a new feature that lets our customers view the complete waterfall of every visitor’s session, which can be a tremendous help in debugging performance issues.

The challenge with ResourceTiming is that it offers a lot of data if you want to beacon it all back to a server. For each resource, there’s data on:

  • URL
  • Initiating element (e.g. IMG)
  • Start time
  • Duration
  • Plus 11 other timestamps

Here’s the performance.getEntriesByType('resource') entry for a single resource:

{"responseEnd":2436.426999978721,"responseStart":2435.966999968514,
"requestStart":2435.7460000319406,"secureConnectionStart":0,
"connectEnd":2434.203000040725,"connectStart":2434.203000040725,
"domainLookupEnd":2434.203000040725,"domainLookupStart":2434.203000040725,
"fetchStart":2434.203000040725,"redirectEnd":0,"redirectStart":0,
"initiatorType":"internal","duration":2.2239999379962683,
"startTime":2434.203000040725,"entryType":"resource","name":"http://nicj.net/"}

JSON.stringify()‘d, that’s 469 bytes for this one resource.  Multiply that by each resource on your page, and you can quickly see that gathering and beaconing all of this data back to a server will take a lot of bandwidth and storage if you’re tracking this for every single visitor to your site. The HTTP Archive tells us that the average page is composed of 99 HTTP resources, with an average URL length of 85 bytes.

So at roughly 469 bytes per resource times 99 resources per page, you could expect around 45 KB of ResourceTiming data per page load.

The Goal

We wanted to find a way to compress this data before we JSON serialize it and beacon it back to our server.

Philip Tellis, the author of Boomerang, and I have come up with several compression techniques that can reduce the above data to about 15% of its original size.

Techniques

Let’s start out with a single resource, as you get back from window.performance.getEntriesByType("resource"):

{  
  "responseEnd":323.1100000002698,
  "responseStart":300.5000000000000,
  "requestStart":252.68599999981234,
  "secureConnectionStart":0,
  "connectEnd":0,
  "connectStart":0,
  "domainLookupEnd":0,
  "domainLookupStart":0,
  "fetchStart":252.68599999981234,
  "redirectEnd":0,
  "redirectStart":0,
  "duration":71.42400000045745,
  "startTime":252.68599999981234,
  "entryType":"resource",
  "initiatorType":"script",
  "name":"http://foo.com/js/foo.js"
}

Step 1: Drop some attributes

We don’t need:

  • entryType will always be resource
  • duration can always be calculated as responseEnd - startTime.
  • fetchStart will always be startTime (with no redirects) or redirectEnd (with redirects)

Removing them leaves:

{
  "responseEnd":323.1100000002698,
  "responseStart":300.5000000000000,
  "requestStart":252.68599999981234,
  "secureConnectionStart":0,
  "connectEnd":0,
  "connectStart":0,
  "domainLookupEnd":0,
  "domainLookupStart":0,
  "redirectEnd":0,
  "redirectStart":0,
  "startTime":252.68599999981234,
  "initiatorType":"script",
  "name":"http://foo.com/js/foo.js"
}

Step 2: Change into a fixed-size array

Since we know all of the attributes ahead of time, we can change the object into a fixed-size array. We’ll create a new object where each key is the URL, and its value is a fixed-size array. We’ll take care of duplicate URLs later:

{ "name": [initiatorType, startTime, redirectStart, redirectEnd,
   domainLookupStart, domainLookupEnd, connectStart, secureConnectionStart, 
   connectEnd, requestStart, responseStart, responseEnd] }

With our data:

{ "http://foo.com/foo.js": ["script", 252.68599999981234, 0, 0
   0, 0, 0, 0, 
   0, 252.68599999981234, 300.5000000000000, 323.1100000002698] }

Step 3: Drop microsecond timings

For our purposes, we don’t need sub-millisecond accuracy, so we can truncate all timings to whole milliseconds:

{ "http://foo.com/foo.js": ["script", 252, 0, 0, 0, 0, 0, 0, 0, 252, 300, 323] }

Step 4: Trie

We can now use an optimized Trie to compress the URLs. A Trie is a tree structure in which keys that share a common prefix also share a branch, so each prefix is stored only once.

Mark Holland and Mike McCall discussed this technique at Velocity this year.

Here’s an example with multiple resources:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 0, 0, 0, 0, 0, 0, 0, 252, 300, 323]
            "css/foo.css": ["css", 300, 0, 0, 0, 0, 0, 0, 0, 305, 340, 500]
        },
        "other.com/other.css": [...]
    }
}
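
As a sketch of the idea (not the actual code from Boomerang or resourcetiming-compression.js), you can build a character-by-character trie from the URL keys, then collapse chains of single-child nodes so each shared prefix is stored only once. The helper names are hypothetical, and this simple version assumes no URL is an exact prefix of another URL:

// build a letter-by-letter trie from a { url: value } map
function buildTrie(entries) {
    var trie = {};

    Object.keys(entries).forEach(function(url) {
        var node = trie;

        // walk/create one node per character, except the last
        for (var i = 0; i < url.length - 1; i++) {
            var letter = url.charAt(i);
            node = node[letter] = node[letter] || {};
        }

        // the final character points at the resource's data
        node[url.charAt(url.length - 1)] = entries[url];
    });

    return optimizeTrie(trie);
}

// collapse chains of single-child nodes ("f" -> "o" -> "o") into a single
// longer key ("foo"), so common prefixes are only stored once
function optimizeTrie(node) {
    if (typeof node !== "object" || Array.isArray(node)) {
        return node;    // leaf value (our timing data), nothing to collapse
    }

    Object.keys(node).forEach(function(origKey) {
        var key = origKey;
        var child = node[origKey];

        while (typeof child === "object" && !Array.isArray(child) &&
               Object.keys(child).length === 1) {
            var onlyKey = Object.keys(child)[0];
            key += onlyKey;
            child = child[onlyKey];
        }

        if (key !== origKey) {
            delete node[origKey];
            node[key] = child;
        }

        optimizeTrie(child);
    });

    return node;
}

Running buildTrie() over the map built in the earlier sketch produces nesting like the example above.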

Step 5: Offset from startTime

If we offset all of the timestamps from startTime (which every non-zero timestamp is at or after), they may use fewer characters:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 0, 0, 0, 0, 0, 0, 0, 0, 48, 71],
            "css/foo.css": ["script", 300, 0, 0, 0, 0, 0, 5, 40, 200]
        },
        "other.com/other.css": [...]
    }
}

Step 6: Reverse the timestamps and drop any trailing 0s

The only two required timestamps in ResourceTiming are startTime and responseEnd. Other timestamps may be zero, either because the resource was fetched cross-origin (so its detailed timings are hidden), or because that phase took no time relative to startTime, such as domainLookupStart when DNS was already resolved.

If we re-order the timestamps so that, after startTime, we put them in reverse order, we’re more likely to have the “zero” timestamps at the end of the array.

{ "name": [initiatorType, startTime, responseEnd, responseStart,
   requestStart, connectEnd, secureConnectionStart, connectStart,
   domainLookupEnd, domainLookupStart, redirectEnd, redirectStart] }

With our data:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 71, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0]
            "css/foo.css": ["script", 300, 200, 40, 5, 0, 0, 0, 0, 0, 0, 0, 0]
        }
    }
}

Once we have all of the zero timestamps towards the end of the array, we can drop the trailing zeros entirely. When reading the data later, missing array values can be interpreted as zero.

{
    "http://": {
        "foo.com/": {
            "js/foo.js": ["script", 252, 71, 48]
            "css/foo.css": ["css", 300, 200, 40]
        }
    }
}
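
Continuing the sketch (again with illustrative names, not the real library), steps 5 and 6 can be applied to the fixed-order array from the earlier steps:

// takes [initiatorType, startTime, redirectStart, ..., responseEnd],
// offsets each timestamp from startTime, reverses the timestamp order so
// responseEnd comes first, and drops the run of trailing zeros
function offsetAndTrim(data) {
    var initiatorType = data[0];
    var startTime = data[1];
    var timings = data.slice(2);

    timings = timings.map(function(t) {
        return t ? t - startTime : 0;   // zeros stay zero (missing/hidden)
    }).reverse();

    while (timings.length && timings[timings.length - 1] === 0) {
        timings.pop();
    }

    return [initiatorType, startTime].concat(timings);
}

For example, offsetAndTrim(["script", 252, 0, 0, 0, 0, 0, 0, 0, 252, 300, 323]) returns ["script", 252, 71, 48].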

Step 7: Convert initiatorType into a lookup

Using a numeric lookup instead of a string will save some bytes for initiatorType:

var INITIATOR_TYPES = {
    "other": 0,
    "img": 1,
    "link": 2,
    "script": 3,
    "css": 4,
    "xmlhttprequest": 5
};

Applied to our data:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": [3, 252, 71, 48]
            "css/foo.css": [4, 300, 200, 40]
        }
    }
}

Step 8: Use Base36 for numbers

Base 36 is convenient because it usually takes fewer bytes than base 10, and JavaScript has built-in support for it via Number.prototype.toString(36):

{
    "http://": {
        "foo.com/": {
            "js/foo.js": [3, "70", "1z", "1c"]
            "css/foo.css": [4, "8c", "5k", "14"]
        }
    }
}

Step 9: Compact the array into a string

Joining the array into a single comma-separated string saves a few more bytes during JSON serialization (no brackets, and fewer quotes). We’ll designate the first character of the string as the initiatorType:

{
    "http://": {
        "foo.com/": {
            "js/foo.js": "370,1z,1c",
            "css/foo.css": "48c,5k,14"
        }
    }
}
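
A sketch of steps 7 through 9 combined, using the INITIATOR_TYPES table above (compactArray is just an illustrative name):

// replaces the initiatorType string with its numeric lookup, converts each
// timing to Base36, and joins everything into one compact string whose
// first character is the initiatorType
function compactArray(data) {
    var initiatorType = INITIATOR_TYPES[data[0]] || 0;   // unknown types map to "other"

    var timings = data.slice(1).map(function(t) {
        return t.toString(36);
    });

    return initiatorType + timings.join(",");
}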

Step 10: Multiple hits

Finally, if there are multiple hits to the same resource, they all map to the same key (URL) in the Trie, so later hits would overwrite earlier ones.

Let’s fix this by concatenating the compressed data for each hit to the same URL, separated by a special character such as a pipe | (see foo.js below):

{
    "http://": {
        "foo.com/": {
            "js/foo.js": "370,1z,1c|390,1,2",
            "css/foo.css": "48c,5k,14"
        }
    }
}
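
Putting the sketches together, handling multiple hits just means appending to any existing value instead of overwriting it. The pipe is safe to use as a separator since it can never appear in the Base36 data:

var resources = {};

window.performance.getEntriesByType("resource").forEach(function(entry) {
    var compressed = compactArray(offsetAndTrim(compressEntry(entry)));

    // concatenate multiple hits to the same URL with a pipe
    resources[entry.name] = resources[entry.name] ?
        resources[entry.name] + "|" + compressed :
        compressed;
});

var trie = buildTrie(resources);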

Step 11: Gzip or MsgPack

Applying gzip compression or MsgPack can give additional savings during transport and storage.

Results

Overall, the above techniques compress raw JSON.stringify(performance.getEntriesByType('resource')) to about 15% of its original size.

Taking a few sample pages:

  • Search engine home page
    • Raw: 1,000 bytes
    • Compressed: 172 bytes
  • Questions and answers page
    • Raw: 5,453 bytes
    • Compressed: 789 bytes
  • News home page
    • Raw: 32,480 bytes
    • Compressed: 4,949 bytes

How-To

These compression techniques have been added to the latest version of Boomerang.

I’ve also released a small library that handles both compression and decompression of the optimized result: resourcetiming-compression.js.

Sails.js Intro

August 8th, 2014

Last night at GrNodeDev I gave a small presentation on Sails.js, an awesome Node.js web framework built on top of Express.

Slides are available on Slideshare and Github:

Sails.js Intro Slides

Spark Core Water Sensor

June 20th, 2014

Spark Core Water Sensor

I’ve been playing around with a Spark Core, which is a small, cheap ($39) Wifi-enabled Arduino-compatible device.  As a software guy, I don’t do much with hardware, but the Spark Core makes it really easy to get going.

Anyway, I was able to hook up a Grove Water Sensor to it, and have it mounted near my sump pump.  If the pump ever goes out, the device will send me a text message (via Twilio) to alert me.

The whole setup is pretty simple, and I’ve put all the relevant firmware and software up on Github in case anyone else is interested in doing something similar.  Total hardware cost was $42, and just $1/month for Twilio.

Appcelerator Titanium Intro (2014)

June 6th, 2014

I was part of a panel during last night’s GrDevNight discussing cross-platform mobile development.  Afterwards, I gave a short presentation on Appcelerator:

Appcelerator Titanium Intro

The Happy Path: Migration Strategies for Node.js

May 5th, 2014

Today Brian Anderson, Jason Sich and I gave a presentation at GLSEC 2014 titled The Happy Path: Migration Strategies for Node.js.

It is available on Slideshare:

The Happy Path: Migration Strategies for Node.js

The presentation and code examples are also available on Github.

adblock-detector.js

March 22nd, 2014

I run advertising on several of my websites, mostly through Google AdSense. My sites are free communities that don’t otherwise sell products, so advertising is the main way I cover operational expenses. AdSense has been a great partner over the years and the ads they serve aren’t too obtrusive.

However, I realize that many people see all advertising as annoying, and some run ad-blockers in their browser to filter out ads. AdBlock Plus and others are becoming more popular every year.

Since advertising is such an important part of my business, I wanted to try to quantify what percentage of ads were being hidden by my visitors’ ad-blockers. I did a bit of testing to determine how to detect if my ads were being blocked, then ran an experiment on two of my sites. The first site, with a travel focus, saw approximately 9.4% of ads being blocked by visitors. The second site, with a gaming focus, had over 26% of ads blocked.  The industry average is around 23%.

While the ad-block rates are fairly high, I’m honestly not upset or surprised by the results. Generally, people that have an ad-blocker installed won’t be the kind of audience that is likely to click on an ad. In fact, I often run an ad-blocker myself. However, knowing which visitors have blocked the ads gives me an important metric to track in my analytics. It also offers me the opportunity to give one last plea to the visitor by subtly asking them to support the site via donations if they visit often.

What I don’t want to do is annoy any visitors that are using ad-blockers with my plea, but I do think there’s an opportunity, if you’re respectful with your request, to gently suggest to the visitor an alternate method of supporting the site. Below are screenshots of what sarna.net looks like if you visit with an ad-blocker installed.

sarna-ad-blocker-full

Zoomed in, you can see I provide the visitor alternate means of supporting the site, as well as a way to disable the message for 100 days if they find it annoying:

sarna-ad-blocker-zoomed

Since this prompt is text-only and a muted color, I feel that it is an unobtrusive, respectful way of reaching out to the visitor.  So far, I haven’t had any complaints about the new prompt — and I’ve had a few donations as well.  A very small percentage click on the “hide this message…” link.

The logic to detect ad-blocking is fairly straightforward, though there are a few caveats when detecting cross-browser.  Other sites might find it useful, so I’ve packaged it up into a new module called adblock-detector.js.  I’ve only tested it in a limited environment (IE, Chrome and Firefox with AdBlock Plus), so I’m looking for help from others that can test other browsers, browser versions, ad-blockers and ad publishers.

You can use adblock-detector.js to collect metrics on your ad-block rate, or to appeal to your visitors as I’m doing.  I provide examples for both in the repository.

Please use the knowledge gained for good (e.g. analytics, subtle prompts), not evil (e.g. more ads).

If you want a fully-baked solution, I would also recommend PageFair, which can help you track your ad-block rate, and more.

adblock-detector.js is free, open-source, and available on Github.

Using Phing for Fun and Profit

February 12th, 2014

I gave a small presentation on using Phing as a PHP build system for GrPhpDev on 2014-02-11. It’s available on SlideShare:

Using Phing for Fun and Profit

The presentation and examples are also available on Github.

ChecksumVerifier – A Windows Command-Line Tool to Verify the Integrity of your Files

February 4th, 2014

Several years ago I wrote a small tool called ChecksumVerifier. It maintains a database of files and their checksums, and helps you verify that the files have not changed. I use it on my external hard drive backups to validate that the files are not being corrupted due to bitrot or other disk corruption.  At the time I created it, there were several other simple and commercial Windows apps that would do the same thing, but nothing was free and command-line based.  I’ve been meaning to clean it up so I could open-source it, which I finally had the time to do last weekend.

A checksum is a small sequence of 20-200 characters (depending on the specific algorithm used) that is calculated by reading the input file and applying a mathematical algorithm to its contents. Even a file as large as 1TB will only have a small 20-200 character checksum, so checksums are an efficient way of saving the file’s state without saving its entire contents. ChecksumVerifier uses the MD5, SHA-1, SHA-256 and SHA-512 algorithms, which are generally collision-resistant enough for validating the integrity of file contents.

One example usage of ChecksumVerifier is to verify the integrity of external hard drive backups. After saving files to an external disk, you can run ChecksumVerifier -update to calculate the checksums of all of the files on the external disk. At a later date, if you want to validate that the files on the disk have not been added, removed or changed, you can run ChecksumVerifier -verify and it will re-calculate all of the disk’s checksums and compare them to the original database to see if any files have been changed in any way. (Example commands follow the usage listing below.)

ChecksumVerifier is pretty flexible and has several command line options:

Usage: ChecksumVerifier.exe [-update | -verify] -db [xml file] [options]

actions:
     -update:                Update checksum database
     -verify:                Verify checksum database

required:
     -db [xml file]          XML database file

options:
     -match [match]          Files to match (glob pattern such as * or *.jpg or ??.foo) (default: *)
     -exclude [match]        Files to exclude (glob pattern such as * or *.jpg or ??.foo) (default: empty)
     -basePath [path]        Base path for matching (default: current directory)
     -r, -recurse            Recurse (directories only, default: off)

path storage options:
     -relativePath           Relative path (default)
     -fullPath               Full path
     -fullPathNodrive        Full path - no drive letter

checksum options:
     -md5                    MD5 (default)
     -sha1                   SHA-1
     -sha256                 SHA-2 256 bits
     -sha512                 SHA-2 512 bits

-verify options:
     -ignoreMissing          Ignore missing files (default: off)
     -showNew                Show new files (default: off)
     -ignoreChecksum         Don't calculate checksum (default: off)

-update options:
     -removeMissing          Remove missing files (default: off)
     -ignoreNew              Don't add new files (default: off)
     -pretend                Show what would happen - don't write out XML (default: off)
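
For example, using only the options documented above (the drive letter and database file name here are just placeholders), updating and later verifying an external drive might look like:

ChecksumVerifier.exe -update -db e:\checksums.xml -basePath e:\ -recurse -sha256
ChecksumVerifier.exe -verify -db e:\checksums.xml -basePath e:\ -recurse -sha256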

ChecksumVerifier is free, open-source and available on github.

Minifig Collector v11.0

October 8th, 2013

My original Minifig Collector app (the first Android app I ever created), which has seen over 150,000 installs, just got a major facelift and some new features in version 11.0.  It now has a more modern-looking UI, can import/export your figures to Brickset, and lets you finger-swipe back and forth. Check it out!

Screenshots:

main list
browse brickset

Unofficial LEGO® Minifigure Catalog v2.0

September 13th, 2013

Over the past few weeks I’ve been working on version 2.0 of the Unofficial LEGO® Minifigure Catalog app. We’ve just released version 2.0 to the Apple iTunes and Google Play app stores.

Version 2.0 introduces tablet support along with a complete visual facelift. In addition, there are several performance improvements that make the app much faster when browsing, and I’ve added the ability to browse minifigures, sets and heads by name (in addition to by year and theme).

all-3

Check it out!

Note: LEGO® is a trademark of the LEGO Group of companies which does not sponsor, authorize or endorse this app.