Optimizing openSpaceBook

A few days ago I gave a presentation at San Jose State University on how to speed up your website’s frontend performance, which, as Yahoo’s performance research famously found, accounts for 80-90% of a page’s load time.

Rather than throw up a bunch of slides and blab on about expires headers, gzipping and concatenating, I created an awesome social networking website called openSpaceBook. Go ahead and check it out, any login will work.

Now that you’ve realized you should cancel your Facebook account because this is going to be awesome, let’s look at why it’s actually pretty slow to load and how we can speed it up.

According to webpagetest.org, a handy-dandy webpage loading tool, it takes about 12 seconds for openSpaceBook’s home.html page to load on DSL. 12 seconds! And that’s just for a static HTML page, with nothing going on in the backend. WTF is taking so long?

This is what’s taking so long:

Waterfall view of requests

26 HTTP requests, 773KB! Rendering doesn’t start until 12 seconds into downloading the page! (The thin vertical blue-green line.)

That’s not to say this page is unusually bloated compared to other websites. Facebook makes 182 requests and serves about 500KB of assets (down from over 1MB a few months ago). YouTube makes 42 requests for 356KB. And those sites have most likely been optimized; openSpaceBook has not.

So I have this bloated website, but I can’t remove any features because I’ll lose the ‘ooh shiny!’ audience that I’m trying to attract. What should I do?

Step 1: Reduce HTTP requests

The first thing we can do is reduce the number of HTTP requests, and there are three reasons to do so. The first is that every HTTP request carries a small amount of network overhead, so the fewer requests you make, the less overall network traffic you generate.

The second reason is that browsers limit the number of simultaneous HTTP requests they will make to a webserver (per hostname, specifically). Older browsers were limited to 2. Some newer browsers have upped that limit to 6 or 8, but there is still a limit. Once a browser reaches it, it has to wait for in-flight requests to finish before starting new ones, so the more requests a page needs, the more queuing occurs.

The third reason is specific to JavaScript: browsers will only download and execute one JavaScript file at a time. This is because JavaScript can modify the DOM, redirect the page or do any number of things that may affect which resources need to be downloaded next. So even if a browser can download 8 requests in parallel, JavaScript files will still be downloaded sequentially. (There are efforts to improve this in WebKit and Gecko.)

So let’s see how we fare after reducing our HTTP requests. I did this manually by copying and pasting all of my CSS files into one file and all of my JS files into another. I also merged all of my CSS background images into a single sprite image, then used CSS background positioning to show only the graphic I need (see the sketch below).
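Here’s a rough sketch of the kind of CSS involved (the class names, sprite path and pixel offsets are made-up examples for illustration):

/* one combined sprite image shared by all icons */
.icon {
  background-image: url("/images/sprite.png");
  background-repeat: no-repeat;
  width: 16px;
  height: 16px;
}
/* shift the sprite so only the wanted graphic shows */
.icon-friend { background-position: 0 0; }
.icon-message { background-position: -16px 0; }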

Results after combining files

waterfall after combining files

Shazam! Down to 5.8s! That’s roughly a 50% reduction in load time for less than 30 minutes worth of work, and 8KB has been shaved off of the page. Sweet. On to our next step.

Step 2: Set expires headers

Another simple trick that requires only a few lines of code is setting expires headers. An Expires header tells the browser when an asset ‘expires’ from its cache. When you set it years into the future, the browser will cache the asset and never ask the website for it again.

Expires headers look like this:

Expires: Thu, 15 Apr 2020 20:00:00 GMT

The assets you want to set expires headers on are things that don’t change much: images, CSS and JavaScript. I know, your CSS & JavaScript might change every week or two, and a far-future expires header could leave users stuck with stale files. The way around this is to change the filename of an asset whenever you update it, either manually or with a build script of some sort.
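One common approach is to bake a version into the file name and bump it on every release. A quick sketch (the file names here are only placeholders):

<!-- old release -->
<link rel="stylesheet" href="/css/openspacebook.v1.css">
<!-- new release: the changed file name forces browsers to download it fresh -->
<link rel="stylesheet" href="/css/openspacebook.v2.css">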

You can set expires headers in Apache by adding this to your httpd.conf or .htaccess file:

<IfModule mod_headers.c>
  <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
    Header set Expires "Thu, 15 Apr 2020 20:00:00 GMT"
  </FilesMatch>
</IfModule>

So let’s see how we fare:

waterfall with expires headers

Hmm, 6 seconds. No improvement. Not surprising, since we didn’t really change anything; in fact, we added an extra header to each response. The extra 0.2 seconds could be due to that, or simply to varying network latency or utilization between tests.

What did improve is the repeat-view load time, which dropped from 1.3 seconds to 0.6 seconds, and that improvement carries over to every other page. Previously cached assets will not be downloaded again, or even checked for a newer version (an If-Modified-Since request), so a user’s experience across multiple page views will be significantly faster.

So what else can be done to really improve performance?

Step 3: Gzip your content

Now we’re talking. Compressing your content is an easy way to significantly reduce the amount of data transmitted. HTML, CSS & JS are all text, so they compress very well. So let’s see what kind of performance increase we get when we compress our content.

waterfall of compressed content

Yes! Down to 2.7 seconds! And we’ve cut the amount of data down to 223KB, which equates to $$$ saved when you pay for bandwidth.

To gzip your content in Apache, you can add this to your httpd.conf or .htaccess file:

<IfModule mod_deflate.c>
  <FilesMatch "\.(js|css|html)$">
    SetOutputFilter DEFLATE
  </FilesMatch>
</IfModule>
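To double-check that compression is actually being applied, you can request an asset yourself and look for a Content-Encoding: gzip header in the response (the URL below is just a placeholder for wherever your files live):

curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://www.example.com/css/openspacebook.css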

Step 4: Move JavaScript to the bottom

Remember way back in step #1 when I said browsers only download one JavaScript file at a time? Well, scripts also block rendering of any content that comes after them in the DOM. So when your JavaScript is referenced at the top of the page, as most webpages do, it blocks the rest of the page from rendering. The solution? Move your scripts below all of your content (see the snippet below)!
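In practice that just means moving the script tag out of the head and down to just before the closing body tag, roughly like this (the file name is a placeholder):

<!-- before: a script in the head blocks rendering of everything below it -->
<head>
  <script src="/js/openspacebook.js"></script>
</head>

<!-- after: the content renders first, then the script downloads and runs -->
<body>
  ...page content...
  <script src="/js/openspacebook.js"></script>
</body>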

Here’s what happens when you move your JavaScript to the bottom of the page:

waterfall of JavaScript at the bottom of the page

So, down to 2.4 seconds (even though the amount of data is the same; the difference is most likely network latency or bandwidth fluctuation between tests), but rendering of the page now starts at 0.9 seconds instead of 2.3 seconds in step 3. That makes the page appear to load faster, because the browser starts showing content before it has finished downloading the JavaScript file at the end of the page.

So, we’re down to 2.4 seconds, can we do any better? You’re darn tootin’!

Step 5: Minify CSS & JS

Hang in there, we’re at the last step. Here’s one more way to reduce the amount of content sent: minify your CSS and JS. Basically this means removing all newlines, comments and unnecessary spaces. It can also include programmatically replacing JavaScript variable names with shorter ones.

The easiest way to do this is to use a compressor like YUI Compressor. It’s a Java app that takes a CSS or JavaScript file and spits out a minified version.
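Running it from the command line looks roughly like this (adjust the jar version and file names to match your own):

java -jar yuicompressor-2.4.2.jar openspacebook.js -o openspacebook-min.js
java -jar yuicompressor-2.4.2.jar openspacebook.css -o openspacebook-min.css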

So how well does openSpaceBook do after minifying?

waterfall after minifying content

2.2 seconds, not bad. Down to 154KB too. So openSpaceBook has gone from 12 seconds and 773KB of data to 2.2 seconds and 154KB. That’s an 81% improvement in load time and an 80% decrease in page weight. And all of those steps took me about an hour or two. That’s a pretty good ROI for a couple hours’ worth of work.

Summary

Speed is everything. It’s a feature that is often left out of PRDs and out of users’ conscious thoughts about websites, but it’s there, and users notice on a subliminal level when your site is slow. With all the hoopla about browsers getting faster and faster, how quickly you can deliver content to your users matters more and more.

Resources