
On HTTP: Page Load Times, Multiple File Requests, and Deferred JavaScript

One of the things I’ve heard about time and time again is reducing page load times by reducing the number of HTTP requests. I’ve been a little skeptical, discounting it as trivial gains versus the maintenance considerations, but there’s been more and more actual scientific research in this area.

It came up again and again at the Ajax Experience conference, particularly from Yahoo!’s Nate Koechley. Well, they’ve started to post the results of the tests he was talking about to their blog:

Keep an eye out for part II. They’re actually not the first to publish some findings:

Multiple CSS Files and @import

The trade-off between maintenance and file load times is a serious consideration, I think, but I also think it’s worth emphasizing going forward as part of a “best practices” approach to bring to clients.

I find myself this AM staring at a CSS folder of, count them, 14 CSS files and thinking, wow, that’s a lotta HTTP going on. Granted, they’re not all loaded at one time, but even if you could cut that in half, you’d be winning something. The point behind the groups of files was simple: a home file imports the shared design and layout files, and then the home-specific design and layout files. The drill-down pages import only the design and layout. The Investment site loads the design and layout, and then an IR file. But each page has only one link tag, which points at an @import file. I love this, it’s so flexible.

In a CMS world, you might also be able to change what the core file loads simply by editing the @import file, never touching the template files or republishing.

These things have been my motivators. So flexible.
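
To make that concrete, here is a minimal sketch of the kind of setup I mean; the file names (main.css, layout.css, design.css, home.css) are made up for illustration:

    <!-- the only link tag in the template -->
    <link rel="stylesheet" type="text/css" href="/css/main.css" media="screen" />

    /* main.css: nothing but @import rules, so swapping a line here changes
       what every page loads without touching the templates or republishing */
    @import url("layout.css");
    @import url("design.css");
    @import url("home.css");  /* only the home page's hub file would include this */

The catch, of course, is that every one of those @import rules is still its own HTTP request, which is exactly the trade-off being weighed here.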

CSS Sprites, File Merges, Etc.

The other thing you might have heard about is CSS sprites: packing many graphics into a single image and clipping to show only the portion of the image that matters. Again, what a pain for maintenance, but the payoff might be half as many image requests over HTTP, or better.
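
As a rough sketch of the sprite idea, with an invented file name and coordinates: several icons live in one image, and each rule shows a different 16px slice of it by shifting the background position.

    /* icons.png holds all three icons stacked vertically, each 16px tall;
       background-position picks out the slice each link should show */
    a.home  { background: url(icons.png) no-repeat 0 0;     padding-left: 20px; }
    a.print { background: url(icons.png) no-repeat 0 -16px; padding-left: 20px; }
    a.rss   { background: url(icons.png) no-repeat 0 -32px; padding-left: 20px; }

One image, one HTTP request, three icons.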

One of the things Nate Koechley suggested was using, as part of a build process for deployment, scripts which merge your multiple files into one. We don’t really use build processes at work, but there are plenty of free text file merge utilities out there:

Ideally you might have a tool which can also split the files back apart. Looking around, it seems there are several available.

None of this preserves the flexibility gains described above from the @import approach, but maybe there are some savings to be had.
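
For what it’s worth, the merged file itself is nothing exotic; it’s just the individual files concatenated in the same order the @import rules listed them, so the cascade resolves the same way. A hypothetical sketch:

    /* main-merged.css: hypothetical output of a merge step, concatenated
       in the original @import order so the cascade is unchanged */

    /* ---- layout.css ---- */
    #wrapper { width: 760px; margin: 0 auto; }

    /* ---- design.css ---- */
    body { font: 76%/1.5 Verdana, Arial, sans-serif; }

    /* ---- home.css ---- */
    #promo { float: right; width: 300px; }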

There are other techniques too, not just for CSS.

Multiple JavaScript Files, Deferred Loading, Etc.

Douglas Crockford, general JavaScript guru and also from Yahoo!, has pointed out more savings to be had: JavaScript files are loaded in sequence, and in the meantime everything else is deferred; the browser effectively stalls while loading them. So they’re loaded, in order, and nothing else is loaded during that time.

While having the privilege to beta test the new Firebug 1.0, I was amazed to witness this first hand with its new network tools. Below you can see a sample of the NavigationArts home page loading: the order of the files, and in fact the script files loading in sequence while nothing else was being fetched.

Firebug showing network connections

Doug Crockford’s actual advice was to place all JavaScript at the end of the document as opposed to in the head. This might actually solve several problems (DOMContentLoaded, anyone?) and indeed may speed up page load performance. If you check out a few of the Yahoo! properties, they’re actually starting to do that in places. Now, this does go contrary to general practice…
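
In practice that just means moving the script tags from the head down to just before the closing body tag; a minimal sketch, with placeholder file names:

    <html>
    <head>
      <title>Example</title>
      <link rel="stylesheet" type="text/css" href="/css/main.css" media="screen" />
      <!-- no script tags up here blocking the rest of the page -->
    </head>
    <body>
      <p>Content renders before any script is fetched or parsed.</p>

      <!-- scripts go last so they don't stall everything above them -->
      <script type="text/javascript" src="/js/site.js"></script>
    </body>
    </html>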

Another concept I’ve become enamored with is the notion of “Lazy Loading” or On-Demand JavaScript. Conceptually speaking, this is the idea of being able to load scripts as needed, when their features are demanded, through techniques similar to Ajax.
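
A bare-bones sketch of the idea, with made-up file and function names: append a script element to the head only when a feature is first needed, and run a callback once it has arrived.

    // load a script on demand; url and callback are supplied by the caller
    function loadScript(url, callback) {
      var done = false;
      var script = document.createElement("script");
      script.type = "text/javascript";
      script.src = url;
      // onload covers most browsers; readyState covers IE
      script.onload = script.onreadystatechange = function () {
        if (!done && (!this.readyState ||
            this.readyState === "loaded" || this.readyState === "complete")) {
          done = true;
          callback();
        }
      };
      document.getElementsByTagName("head")[0].appendChild(script);
    }

    // hypothetical usage: don't fetch the widget code until it's asked for
    loadScript("/js/fancy-widget.js", function () {
      initFancyWidget();  // assumed to be defined by the file just loaded
    });

The script still costs an HTTP request when it does load, but the initial page gets on screen without waiting for it.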

I’m sure there are more lessons to be learned here, especially on the trade-offs.

Conclusions

So there are a number of things mentioned here to reduce HTTP requests and speed page load times. For the full list, check out the articles linked at the top of this post. But in brief, my favorites:

  • Reduce the number of CSS files
  • Reduce the number of image files by using CSS sprites
  • Reduce the number of JavaScript files
  • Load the JavaScript files at the bottom of the document as opposed to the head
  • Consider Lazy Loading, or On Demand script file loading when you can

Nov 29, 11:10 AM in Web Development (filed under Web101, Browsers)

  1. nate koechley    Nov 29, 11:16 PM    #

    Hi Rob,

    As we build richer web interfaces, performance will be increasingly important. You’ve done a great job in this post summarizing and discussing helpful approaches. Thanks!

    I’m envious that you’ve been playing with Firebug already. A few friends have beta copies, and they are all members of the praise-singing choir. Can’t wait til it drops.

    Take care,
    Nate

    Nate Koechley
    Yahoo! Inc.
    http://developer.yahoo.com/yui
    http://yuiblog.com | http://nate.koechley.com/blog

  2. rob    Nov 30, 11:26 AM    #

    Hey Nate, thanks for posting. Your and Doug’s talks at the Ajax Experience in Boston very much got me thinking along these lines, so I have to send it back towards the work you guys are doing at Yahoo! — thanks for spreading the word. You’re definitely leading the charge on making the UI layer prime time stuff, which I appreciate enormously.

    By the way, I really did enjoy your talks. You might remember me as the guy lingering with my laptop at the end of your second presentation while you were talking to the fellow from AOL. Just sorry I didn’t make it to Refresh 06. Oh well.

    Firebug 1.0 is exceptional. I’ve had tons of fun playing with it. I’m sure if you asked Joe Hewitt nicely, he’d probably let you in. :-)

  3. Franklin Davis    Dec 5, 05:00 PM    #

    Hi, Rob—

    Another reason to be increasingly concerned about the number of HTTP requests and deferred loading is the rapid growth of mobile access to the full Web. We at Nokia Boston build the S60 Web Browser, which is basically the Apple Safari browser, but running on a 200 MHz ARM processor, with memory limited to 10-15 MB, over very high-latency, low-bandwidth networks. Back to the future!

    We have taken the unusual step of laying out the HTML in a first pass that skips external CSS and Script files, then refreshing the layout when the external files are loaded. It’s not an ideal experience — a “blink” as the page redraws, losing your scroll position — but the alternative is to wait until everything is loaded, and that is just too long on a typical cellular data connection. And often users can quickly find a link they want during the initial HTML view, saving the rest of the loading.

    (GPRS “2G” and EDGE “2.5G” have about 900 ms latency per request, which is the main culprit; typical transfer speeds of 40-60 kbps add to the problem. UMTS “3G” and HSDPA “3.5G” lower latency to around 150-200 ms and increase speed a lot, which will help.)

    So we are looking into the optimal structure for a full page to ensure fast display without requiring the double layout, but also without delaying “time to first display”. Avoiding multiple external files, and especially “chained” files, is our most important finding so far.

    Sounds like these tools will help improve the analysis. Interested in anyone else’s experience & analysis on mobile.

    Regards,

    —Franklin Davis
    Nokia Inc.
    Head of Business Development, S60 Browsing

  4. rob    Dec 5, 09:27 PM    #

    Hey Franklin — I actually didn’t know that Nokia was working on a Safari-based browser. Very cool. I’ll keep an eye on that.

    Sounds like the biggest disadvantage is in fact loss of that scroll position, but you need to do what you can to optimize the experience.

    Mobile is only going to improve, and the people thinking about it and analyzing the way things work will be the ones leading the charge.

    Best of luck!

  5. Diego Perini    Dec 7, 12:34 PM    #

    Franklin,
    I have a 6600 and a 6680 and have done some work on these phones with my DOMComplete event handler; unfortunately I did not test the embedded WebKit browser much. Mostly I tested Opera, and I was able to get my Tips to work on the phone too. That was what really pushed me to investigate the page load process further.

    To answer your request as briefly as I can in a blog comment: avoid “document.write”, and do not put scripts in the body of the page; put them in the head section of the document. If possible, include only the minimal JS needed to start up in the head, and combine your external JS into a single file to avoid several overlapping requests.

    Again a lot of tests (not finished yet) are on my test site at:

    http://javascript.nwbox.com/DOMComplete-MINI/

    Look there; I will be glad to hear your opinions/suggestions if you wish.

  6. dan    Jan 5, 04:32 PM    #

    I always wonder: if I load 100 KB in 5 requests versus 100 KB in 1 request, what’s the performance difference? Probably minimal?

  7. rob    Jan 5, 04:56 PM    #

    Dan,

    I don’t think there’s an absolute calculation you can use, honestly. Between different networks, software, types of sites, and hardware, all bets on consistency are off.

    However, there are places to start. Over dial-up, 100 KB stinks any way you look at it, but it’s going to happen.

    So where do you look? There’s the server side, but Nate’s point was that only about 10% of the performance lag (depending on the site) is on the server; the rest comes from the pipe down to the client and all those requests for the assets.

    I think the point folks are making is this: with those extra 4 requests hitting (1) the browser, (2) the server, and (3) the pipe, what can you control? You can control the way and order in which pages and assets are structured, which affects the latency on all those requests, and it adds up.

    But dissect that. A single request has one delay before the response is spit back to the client, one delay before it’s decompressed in the client, and one delay before the page reflows and displays it, all on a hardware configuration you can’t begin to guess about.

    Additionally, Nate mentions browsers “download only two or four components in parallel per hostname”, so what happens to your 5 objects?

    Again, not an absolute calculation, but multiply the above by five and I guess you start to see the conditions they’re talking about.

    Finally, on a lot of sites it’s just not going to matter, but the richer the user experience, the more files, media, and scripts typically get sent down the pipe.

    It can add up, and there’s a philosophy with optimization called “every byte counts”. Also, there’s that 80/20 rule he talks about.
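
    To put rough, made-up numbers on it: say the round trip to the server costs 200 ms of latency, and the browser opens two connections per hostname. One request pays that latency once; five requests get serviced in three waves of two, so roughly:

        1 request:  1 round trip  x 200 ms = ~200 ms of latency, plus transfer time
        5 requests: 3 round trips x 200 ms = ~600 ms of latency, plus transfer time

    Same 100 KB either way, but the extra requests add latency the single file never pays, and on the kind of high-latency connections Franklin describes above, the gap gets much wider.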

  8. rob    Feb 3, 12:21 PM    #

    Just a quick update with some other performance related links:

    First, Using CNames to get around browser connection limits

    Then, a cool article on Making your pages load faster by combining and compressing JavaScript and CSS files...

    Third, their script which does the work for you… combine.php.

    Fourth, several, rather old actually, articles over at fiftyfoureleven.com on gzipping CSS. (one, two)

