The Velocity 2008 Conference Experience – Part VI
After a tasty pseudo-Asian hotel lunch (though just about anything would be tasty by now!), we move into the final stretch of afternoon sessions for Velocity. Everyone seems to be in a good mood after the interesting demos this morning and the general success of the conference.
First up, it’s the eagerly awaited Even Faster Web Sites. Steve Souders, previously Yahoo!’s performance guru and now Google’s, has another set of recommendations on Web performance. His previous book, with its 14 rules, and the Firebug plugin YSlow that supported it are among the things that really got us hooked on the Web performance space.
He starts by reviewing why front end performance is so important. In the steady state, 80-90% of the load time the user sees on an average page comes after the server has spit out the HTML – “network time.” Optimizing your back end code speed is therefore a smaller area of improvement than optimizing the front end. And the front end can be improved, often in simple ways.
Man, there’s a wide variance in how people’s pages perform with a primed cache – from no benefit (most of the Alexa top 10) to incredible benefit (the Google and MS Live Search results pages). Anyway, Steve developed his original 14 best practices for optimizing front end performance, and then built YSlow to measure them.
And now he’s starting a second book! There are ten more best practices. The slides are on his blog if you want the full version.
- Split the initial payload
- Load scripts without blocking
- Don’t scatter scripts
- Split dominant content domains
- Make static content cookie-free
- Reduce cookie weight
- Minify CSS
- Optimize images
- Use iframes sparingly
- To www or not to www
Neat tool he wrote – Cuzillion, a page simulation tool.
“Splitting the initial payload” means initially loading only the JavaScript needed for page load, and lazy-loading everything else. On average, people only execute about 25% of their JS functions before the onload event fires. Tools to do this separation are needed (Microsoft’s Doloto does it, but it’s not public).
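Roughly, the lazy-load half of that looks like the sketch below – my own illustration, not his code; the extras.js bundle name is made up, and on old IE you’d use attachEvent rather than addEventListener.

```typescript
// A minimal sketch of lazy-loading the non-critical JavaScript after onload.
// The /js/extras.js bundle name is made up for illustration.
function loadScript(src: string, onDone?: () => void): void {
  const script = document.createElement("script");
  script.src = src;
  if (onDone) {
    script.onload = onDone;
  }
  document.getElementsByTagName("head")[0].appendChild(script);
}

// Let the page finish rendering first, then pull in the deferred bundle.
window.addEventListener("load", () => {
  loadScript("/js/extras.js", () => {
    // Everything the user didn't need during initial render is now available.
  });
});
```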
MSN.com figured out how to do parallel script downloads – a pretty simple but clever method using document.write – and showed it could be done. There’s a mess of options: XHR eval, XHR injection, script in iframe, Script DOM Element, script defer, and document.write script tag. Each approach has a bunch of gotchas. Some only work in IE, some require JS changes. Some trigger the various “busy” indicators (throbbers, status bars) in the browsers. Then you need to figure out whether you need guaranteed execution order or want “fastest” (execute as they arrive). The “winner” is actually a decision tree, but it boils down to Script DOM Element or XHR injection. Kinda.
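The sketch above is essentially the Script DOM Element approach, so for flavor here’s roughly what the XHR injection variant looks like – again my own sketch, not MSN’s or Steve’s code; the /js/widgets.js URL is a placeholder, and remember XHR injection only works for same-domain scripts.

```typescript
// A sketch of the XHR injection variant: fetch the script text with
// XMLHttpRequest (so it downloads in parallel, same-domain only), then inject
// it into a script element to execute it. The URL is a placeholder.
function xhrInjectScript(url: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // asynchronous, doesn't block other downloads
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      const script = document.createElement("script");
      script.text = xhr.responseText; // injecting the text executes the fetched code
      document.getElementsByTagName("head")[0].appendChild(script);
    }
  };
  xhr.send(null);
}

xhrInjectScript("/js/widgets.js");
```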
Inline scripts have issues too – they block rendering and downloads as well. There are some mitigations. Also, IE doesn’t block on parallel CSS downloads unless they follow an inline script. So don’t “scatter” scripts through your page; they screw things up.
So yay, front end performance… My guys and I are starting to worry, however, about whether people are starting to ignore back end performance. A lot of these changes are getting pretty finicky, and though our “best case” performance is mostly front end, most of the performance *problems* are back end.
Having sweated my ass off for long enough, I decamp to the larger ballroom for “Performance Plumbing” with Adam Bechtel from Yahoo!. He starts off talking about the natural progression of a growing site: from one server, to a mess of them, to two data centers, to widely geo-distributed centers where you start to run your own networks between them. And connect your network up with partners. Suddenly you have a global backbone. And you develop a bunch of routing policies.
You deploy “anycast” services. And scale them. This is dry. Very dry. How do you monitor your anycast service? Make caches. Proxies. Aside from humorous plumbing pictures I’m a little unclear what I’m supposed to be getting out of this talk. Luckily, it’s over. Robert leans over to me and says, “I’m not sure what to say about that.”
Back to the hot room for Stoyan Stefanov of Yahoo! on image size reduction. Seven tips. Again, I can barely hear him (bad sound in this room + mumbling).
- Use PNG, not GIF.
- Run pngcrush.
- Remove JPEG metadata with jpegtran.
- Use palette PNGs (PNG8), not truecolor.
- There’s an AlphaImageLoader CSS filter to help with the truecolor PNG alpha transparency problem in earlier IEs. Try not to use filters; use PNG8 or “the underscore hack _filter” (no, I don’t know what this means either).
- Don’t serve generated images as-is. Write generated images to disk and crush them (there’s a sketch of this below). He compares some images from normal to crushed to PNG8, and I sure can’t tell the difference, and you get like 50% savings at PNG8.
- Use CSS sprites and optimize them: lay the images out horizontally, ideally, and close together. Stay within 256 colors and use PNG8.
Bonus:
- Don’t serve images from your app server.
You should be able to save 10-30% on your image sizes. Man, that went fast.
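As promised above, here’s roughly what the crush-your-generated-images tip could look like in practice – my own Node-flavored sketch with made-up paths and a made-up fallback, not anything he showed; pngcrush is invoked in its basic “pngcrush <input> <output>” form.

```typescript
// A sketch of the tip: write the generated image to disk, crush it, and serve
// the crushed copy from a static path instead of regenerating it per request.
import { execFile } from "child_process";
import { writeFileSync } from "fs";

function publishGeneratedImage(name: string, pngBytes: Buffer): void {
  const rawPath = `/tmp/${name}.raw.png`;
  const crushedPath = `/var/www/static/images/${name}.png`;

  writeFileSync(rawPath, pngBytes); // write the generated image to disk first
  execFile("pngcrush", [rawPath, crushedPath], (err) => {
    if (err) {
      // If pngcrush isn't available or fails, fall back to the uncrushed image.
      writeFileSync(crushedPath, pngBytes);
    }
    // Either way, the web tier now serves a static file, not the generator.
  });
}
```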
Next, shared dictionary compression over HTTP with Wei-Hsin Lee from Google. It’s called SDCH, pronounced “Sandwich.” Compression is nice, but if you could avoid retransmitting identical pieces of data (like WAN accelerators do at the network level), you’d be cooking with gas. Good concept; I’ve always thought that ESI markup of pages was too manual and that “my damn computer should figure that out by itself.”
RFC 3229, delta compression in HTTP, was judged no good – too many states to track and too limited a scope (same URL only). Dame da!!! (“No good!”)
So Google has introduced SDCH. It’s an HTTP addition with a small set of states shared between client and server. It’s domain-scoped, like a cookie. It uses VCDIFF (RFC 3284) plus enhancements, and it defines the server/client interaction with respect to capabilities and the dictionary.
Basically, sdch is listed in the HTTP Accept-Encoding: header. The server then sends a Get-Dictionary: header with the page, pointing at a dictionary the client lazy-loads. On subsequent requests the client says which dictionaries it has (again, much like cookies), and the server responds with an sdch-encoded response.
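For my own notes, here’s roughly how I picture that exchange on the server side – strictly my sketch of the talk, with a made-up dictionary path and hash and stubbed-out rendering/encoding, not Google’s implementation.

```typescript
// A rough sketch of the SDCH handshake: advertise the dictionary, let the
// client fetch it, and only send sdch-encoded bodies once the client reports
// holding it. Paths, hash, and the stub functions are all made up.
import * as http from "http";

function renderPage(): string { return "<html>…search results…</html>"; }
function dictionaryBytes(): string { return "common boilerplate shared across result pages"; }
function encodeWithVcdiff(body: string): string { return body; /* real encoder elided */ }

http.createServer((req, res) => {
  const acceptEncoding = String(req.headers["accept-encoding"] || "");
  const availDictionary = req.headers["avail-dictionary"]; // hash the client says it holds

  if (req.url === "/dict/results_v1") {
    res.end(dictionaryBytes()); // client lazy-loading the advertised dictionary
    return;
  }

  if (acceptEncoding.indexOf("sdch") !== -1 && availDictionary === "aBcD1234") {
    // Client has the dictionary: send a delta-encoded body (gzip layering elided).
    res.setHeader("Content-Encoding", "sdch, gzip");
    res.end(encodeWithVcdiff(renderPage()));
  } else {
    // First visit: plain response plus a pointer to the shared dictionary.
    res.setHeader("Get-Dictionary", "/dict/results_v1");
    res.end(renderPage());
  }
}).listen(8080);
```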
It complements gzip and deflate and generates a 40% improvement on Google search results pages! But they’re still working on how to get it out to users. The spec, encoders, and decoders are coming soon.
But it’s unclear to me how this is better than ESI – he says that generating the dictionaries is site-specific, a.k.a. manual. I asked him during question time, but I fear I didn’t get the question across to him.
Building faster pages in Firefox and IE! Mike Connor (FF) and Lawrence/Stockwell (MS).
For Firefox 3 they used DTrace (Sun), Shark (Apple), and Talos (their perf framework). Parallel connections are up from 2 to 6, and stylesheet loading no longer blocks other downloads. (Seems like many of the Yahoo! High Performance Rules may be largely mitigated by browser fixes over time.)
When working on perf – test. Test before and after. Measure continuously. Perf test after every checkin, because the cost to fix spikes later.
Easy steps: Use the Yahoo! guidelines (oo, he noted they’re trying to make them not matter any more, I’m so smart), incremental layout, and avoid “quirks mode.”
Hard steps: Use the FF DOM Storage implementation. Optimize images with OptiPNG or pngcrush. Trim page markup site-wide.
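The DOM Storage one is the interesting bit to me; something like the sketch below is how I assume you’d use it – made-up endpoint, and I’m using JSON.parse for brevity even though (per the FF 3.1 note below) native JSON wasn’t there yet.

```typescript
// A minimal sketch of the DOM Storage idea: keep data the client already
// fetched in sessionStorage so repeat views skip the network entirely.
function getCachedJson(url: string, onData: (data: unknown) => void): void {
  const cached = window.sessionStorage.getItem(url);
  if (cached !== null) {
    onData(JSON.parse(cached)); // served straight from DOM Storage
    return;
  }
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      window.sessionStorage.setItem(url, xhr.responseText); // cache for next time
      onData(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(null);
}

getCachedJson("/api/sidebar-data", (data) => console.log(data));
```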
Use tools. Firebug/YSlow, Jiffy. Shark on OSX. jQuery profiling plugin. [E: Oh good, that might help us with the jQuery-running-rampant I just discovered on our home page.]
Measure. Mozilla uses Talos to monitor perf against a static pageset; it can also be set to run FF against a live pageset. Watch for regressions!
For FF 3.1 they’re looking to make JSON native, speed up the DOM implementation, and make Firebug rock more.
On to IE. IE8: We’re Fixing It! The first part was a retread of the IE talk from yesterday. Some interesting things – XDomainRequest for cross-domain communication (really?) and adding a timeout to XMLHttpRequest.
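For reference, usage of those two would presumably look something like this – my sketch, with made-up URLs and timeout, not anything they showed; both features are IE8-specific, hence the feature checks and the “any” casts.

```typescript
// A sketch of the two IE8 additions mentioned: XDomainRequest for cross-domain
// calls, and timeout/ontimeout on XMLHttpRequest.
const XDR = (window as any).XDomainRequest;
if (XDR) {
  const xdr = new XDR();
  xdr.onload = () => console.log("cross-domain response: " + xdr.responseText);
  xdr.onerror = () => console.log("cross-domain request failed");
  xdr.open("GET", "http://partner.example.com/feed"); // placeholder partner URL
  xdr.send();
}

const xhr = new XMLHttpRequest();
xhr.open("GET", "/slow-endpoint", true); // placeholder URL
if ("timeout" in xhr) {
  (xhr as any).timeout = 5000; // give up after five seconds
  (xhr as any).ontimeout = () => console.log("request timed out");
}
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4 && xhr.status === 200) {
    console.log(xhr.responseText);
  }
};
xhr.send(null);
```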
Next they show off Performance Analyzer, which is built on top of ETW. Then they show us an internal plugin (DTW) that “won’t be released externally.” Then why do I give a shit? I mean, come on.
They make a point that the various browser toolbars have ill effects, and add-ons of various sorts have the same problems. He has some tips for making well-behaved add-ons, like using a separate thread.
Not a word of this had to do with building faster sites for IE. Boo. I was happy with the Microsofties up till now – their participation in the previous panels and sessions was good and on point – but this is a step back.
Finally a break. Afterwards, the glorious conclusion!