This post originally appeared on the TES Engineering Blog.
We were given some dedicated time at TES to work on page load performance.
This post is about what we chose to measure and the tools we used (WebPagetest and SpeedCurve).
How are we defining 'page load speed'?
How quickly the user can see and interact with core page content after they navigate.
Non-core content could be adverts, their user avatar, or recommended links. It's important they appear as quickly as possible but they're not the main reason the user navigated to the page.
Where are the biggest gains to be made?
Steve Souders defined a rough split between back end and front end performance:
Each request has various stages:
The “backend” time is the time it takes the server to get the first byte back to the client.
The “frontend” time is everything else.
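This back end / front end split can be sketched with the Navigation Timing API. A minimal sketch (not from the original post); the field names (startTime, responseStart, loadEventEnd) come from the Navigation Timing spec:

```javascript
// Compute the back end / front end split from a navigation timing entry.
// "Back end" = everything up to the first byte arriving; "front end" =
// everything from then until the load event (parse, CSS, JS, images, render).
function backendFrontendSplit(nav) {
  const backend = nav.responseStart - nav.startTime;
  const frontend = nav.loadEventEnd - nav.responseStart;
  return { backend, frontend, frontendShare: frontend / (backend + frontend) };
}

// In the browser, feed it the real entry for the current page:
//   const [nav] = performance.getEntriesByType('navigation');
//   backendFrontendSplit(nav);
```

On most pages the frontendShare comes out well above 0.5, which matches the majority-front-end pattern Souders found.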
He surveyed page load times for thousands of websites and found that the majority of page load time was spent on the front end (this was in 2012):
The biggest gains are to be made by focussing on front end performance.
Which tools can we use to measure?
WebPagetest is a free web-based service that captures 'synthetic' metrics, ie it simulates different network conditions like speed and latency (as opposed to capturing metrics from your actual users in the field: 'Real User Monitoring', aka RUM).
You can enter the URL of your page and it will give you a full analysis of the page load speed including HTTP waterfall graph, filmstrip and video.
SpeedCurve is a paid product that runs on top of WebPagetest that lets you run page speed tests as often as you like and graphs the results to show trends.
They've also just added a RUM product called LUX.
How do we measure page load speed?
1. Visual rendering speed
SpeedIndex is calculated as a number (lower === better) by WebPagetest. SpeedCurve shows it as an elapsed time.
Unfortunately, SpeedIndex doesn't seem to be an accurate indicator of when the core content becomes visible.
Because it's based on the proportion of pixels that have painted, and doesn't necessarily wait for the text to render, we were getting SpeedIndex times lower than the point at which the text became visible. The SpeedIndex for the test below was 5.87s, but the text wasn't actually visible until 6.1s:
We've now improved this by inlining the main font in the CSS (using this technique), so there's no lag between the main page render and the webfont downloading:
I'm still treating SpeedIndex with a bit of caution as it sometimes seems to give a different timing to when the page is visually rendered according to the filmstrips.
Now that the text in most cases renders at the same time as the main styles, thanks to the inlined font, Start render seems more accurate.
The bulletproof (although manual) method to check visual render time is to use the waterfall for each test in SpeedCurve, or the filmstrip in WebPagetest. In SpeedCurve if you move your mouse horizontally across the waterfall a filmstrip will appear below it and you can get an accurate indication of when visual render occurred.
The 'Visually complete' metric isn't really useful either: on a long page with lots of images, for example a news article, it would have a very high value even though the core above-the-fold content, which is what really matters to the user, might have loaded quickly.
2. Time to interactivity
After the visual render, there can be a short pause before the user can interact with the page. To measure this, we record a custom mark at the point the page's JavaScript has finished setting up (for a React app this might be the root component's componentDidMount). This is achieved by calling window.performance.mark. It uses the User Timing API to add an entry to the window.performance marks array; you can view any that have been added with window.performance.getEntriesByType('mark').
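As a sketch of the mark itself (the mark name 'page-interactive' is my assumption; any string works):

```javascript
// Record the moment the page becomes interactive. In a React app this
// call might live in the root component's componentDidMount.
performance.mark('page-interactive');

// The User Timing API adds an entry to the performance timeline.
// startTime is measured from navigation start, so each mark doubles as
// "milliseconds since the user navigated".
const marks = performance.getEntriesByType('mark');
console.log(marks.map(m => `${m.name} @ ${m.startTime.toFixed(0)}ms`));
```

In the browser the same object is available as window.performance, which is where the synthetic testing tools read the marks from.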
WebPagetest (and then SpeedCurve) picks up all window.performance marks automatically. SpeedCurve even shows them on its waterfall graphs, and we can track these metrics on the SpeedCurve dashboard and favourites.
This is the first in a series of posts about page performance. The next will be along in a week or so, stay tuned!
A video of me talking about the performance issues discussed in this post.