This document discusses front-end performance measurement. It recommends measuring performance at every stage of a project's lifecycle using both synthetic and real user monitoring tools. Key metrics to measure include time to first byte, speed index, and user timings. Both types of tools provide valuable but different insights and should be used together. Performance data should be reported visually through dashboards to make it relevant and actionable. The goal is to establish a "culture of performance" and catch problems early.
9. In the browser
function myTimings() {
  performance.mark("startTask1");
  doTask1(); // Some developer code
  performance.mark("endTask1");

  performance.mark("startTask2");
  doTask2(); // Some developer code
  performance.mark("endTask2");
}
http://www.w3.org/TR/user-timing/
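Marks on their own are just timestamps; the same API's performance.measure() turns a pair of marks into a named duration you can read back from the performance timeline. A minimal sketch (the mark and measure names are illustrative):

```javascript
// Turn a pair of marks into a named duration, then read it back.
performance.mark("startTask1");
// ... doTask1() would run here ...
performance.mark("endTask1");

// Create a "measure" entry spanning the two marks
performance.measure("task1", "startTask1", "endTask1");

// Read back all measures - e.g. to log them or beacon them somewhere
for (const entry of performance.getEntriesByType("measure")) {
  console.log(`${entry.name}: ${entry.duration.toFixed(1)}ms`);
}
```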
12. • Response End / TTFB
How quickly the server has served the base page
• DOM Content Loaded
A good proxy for “Page is usable”
• Render Start / First Paint
Gives us an indication of when the user actually sees something
• Total Page Load
Although this includes all third-party and deferred content, it can help give a “feel” for how well everything is working
• User Timings
A little more work, but lets you instrument the areas that matter to you
• Speed Index
A great single metric for a pretty good idea of overall user experience
What?
16. Performance Budgets
• Defines tangible numbers or metrics
• May be defined by an aspiration or industry standards
• Enforces the performance standards
• Instills a “culture of performance” in the project team
• Gives a mark to measure by
• You probably already have one!
• Start vague, but define early
• “Performance is everyone’s problem”
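One way to make a budget tangible is to encode it as data and check it automatically. A sketch - the metric names and thresholds below are purely illustrative, not a recommendation:

```javascript
// A performance budget as data, plus a check that could run in CI.
// Names and thresholds here are hypothetical examples.
const budget = {
  ttfb: 500,        // ms
  speedIndex: 3000, // Speed Index (ms)
  pageWeight: 1500  // KB
};

// Return the list of metrics that break the budget
function checkBudget(measured, budget) {
  return Object.keys(budget).filter(
    (metric) => measured[metric] > budget[metric]
  );
}

// e.g. a CI step could fail the build if any metric is over budget
const violations = checkBudget(
  { ttfb: 620, speedIndex: 2400, pageWeight: 1500 },
  budget
);
console.log(violations); // → ["ttfb"]
```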
24. • sitespeed.io
Uses WPT & PhantomJS to run performance audits on a site.
• Can be used internally (CLI tool)
• PerfBar (http://wpotools.github.io/perfBar/)
Surfaces NavTiming data in the browser
• Useful on UAT-type environments
• CI plugins
• Test for performance as part of the CI process
Other Tools
28. How?
• Synthetic
External, controlled testing
• Real User Monitoring
Browser-based reporting of real users’ experiences
• Don’t choose!
Both synthetic and RUM provide valuable insight into performance and should
be seen as complementary - either alone gives a narrow view
• Report
Display data on dashboards, make it visible and relevant
29. Summary
• What: Decide what metrics are relevant to User Experience
• When: At every stage of the lifecycle
• How: Using tools and reports to make the data relevant and actionable
Start with the what…?
What shall we measure?
more questions:
meaningful?
how our pages are performing?
user experience?
what do users *mean*?
The page is usable?
All objects are loaded?
When the browser wheel stops spinning?
Can’t answer
Can help to find out
Know what you can measure to ensure you are meeting your users’ expectations.
Let’s start with the basics…
request an object over HTTP
basic steps to deliver an object over HTTP,
measure all of these
indication of the page delivery performance
bundle the back-end metrics into TTFB
HTML page has been downloaded
how the rest of the page gets built and displayed
DOM = document object model
very simplistic model
partial render tree
render start may happen before DCL
elephant in the room: JavaScript!
blocks DOM construction
CSSOM construction blocks JavaScript execution!
maybe DCL is a useful metric…?
Once in the browser, there are APIs we can use to collect these, and other metrics…
The NavTiming API…
Lots of metrics covering navigation, page load
+ browser events like DCL
http://www.w3.org/TR/navigation-timing/
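A sketch of pulling the headline metrics out of a Navigation Timing (Level 2) entry; taking the entry as an argument keeps the function easy to exercise outside a browser:

```javascript
// Navigation Timing 2 timestamps are relative to navigation start,
// so responseStart is effectively TTFB.
function keyMetrics(nav) {
  return {
    ttfb: nav.responseStart,
    responseEnd: nav.responseEnd,
    domContentLoaded: nav.domContentLoadedEventEnd,
    totalPageLoad: nav.loadEventEnd
  };
}

// In a browser:
// const [nav] = performance.getEntriesByType("navigation");
// console.log(keyMetrics(nav));
```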
The ResourceTiming API…
Performance metrics for page objects / resources
NB Subject to CORS
Must have an allow header (Timing-Allow-Origin)
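Per the spec, cross-origin resources served without Timing-Allow-Origin still appear in the timeline, but their detailed timestamps are reported as zero. A sketch for spotting them:

```javascript
// Cross-origin entries without a Timing-Allow-Origin header report 0 for
// the detailed attributes (requestStart, responseStart, ...) while
// duration remains available.
function restrictedEntries(entries) {
  return entries.filter((e) => e.requestStart === 0 && e.duration > 0);
}

// In a browser:
// console.log(restrictedEntries(performance.getEntriesByType("resource")));
```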
ultimate flexibility, the UserTimings API
own timing marks in JS
Guardian 1st party JS app instrumented
Measure of how quickly the visible portions are drawn
Visual completeness during page load
An index of how long the page spends incomplete
Example:
Start and End at the same time
Graphing completeness over time gives…
Can see that A is more complete more quickly
B is Incomplete longer = worse UX
Index calculated from the area above the curve
Larger area = larger index = worse UX
More detail on the formula online
Used in synthetic testing
Can be calculated from browser paint events,
but this is unreliable and not used commercially
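The area-above-the-curve idea can be sketched in a few lines. This is a simplified rectangle-rule integration over [timeMs, completeness] samples - the real WebPageTest calculation is more involved:

```javascript
// Speed Index sketch: integrate the area *above* the visual completeness
// curve, i.e. the time spent incomplete. Samples: [timeMs, completeness 0..1].
function speedIndex(samples) {
  let area = 0;
  for (let i = 1; i < samples.length; i++) {
    const [t0, c0] = samples[i - 1];
    const [t1] = samples[i];
    area += (1 - c0) * (t1 - t0); // still (1 - c0) incomplete for (t1 - t0) ms
  }
  return area;
}

// Page A renders most content early, page B late - same start and end times
const a = [[0, 0], [500, 0.75], [1000, 1]];
const b = [[0, 0], [500, 0.25], [1000, 1]];
console.log(speedIndex(a)); // 625 - more complete more quickly
console.log(speedIndex(b)); // 875 - incomplete longer = worse UX
```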
So let’s return to the “What?”…
Huge number of metrics
What can we use to represent UX?
Depends
a starting point of what I use…
• Response End: How quickly has my server served the base page
• DOM Content Loaded: A good analogy for “Page is usable”
• Render Start / First Paint: Gives us an indication of when the user actually sees something
• Total Page Load: Although this includes all 3rd-party and deferred content, it can help get a “feel” for how well everything is working
• User Timings: This is a little more work, but allows the ability to instrument the areas important to you
• Speed Index: This is a great single metric to give a pretty good idea of overall user experience
let’s look at the “When?”
when to test
develop then test and hope?
Example of a waterfall methodology
when to measure performance?
It goes without saying: measure in test,
and probably in development too.
What about requirements?
Performance should be a NFR
And monitoring performance after release
editors add content, marketing add tags
ensure that users are still getting the optimal experience.
what about during Design…?
Brad Frost tells us…
Good performance is good design
Many articles on designing for performance
and a book on it by Lara Hogan (“Designing for Performance”).
Designing to be fast from the beginning, rather than trying to optimise later, will always give a better experience for end users, save time in development and test, and make a developer’s life a heck of a lot easier!
A key way to achieve this is to set Performance Budgets;
dev & design collaborating on designing a fast site
PERFORMANCE IS EVERYONE’S PROBLEM
So… we come back to the question of When?
At every stage of the lifecycle
But how do we do that?
We know:
what to measure,
when to measure.
But how?
I’ll walk you through some of the options, using examples of tools on the way.
Synthetic, often referred to as “robots”.
many forms
simple curl-type requests measuring the HTTP request
also commonly used for availability monitoring
doesn’t tell us a lot about UX
easy, and often free or very cheap.
Better to test using “real” browsers -
use an emulated browser to load the page regularly - generally from an external location
Methodologies vary
Test under (relatively) consistent conditions.
Emulated browsers for control
Graph-based portals with waterfall charts
Also test from real VMs running desktop (or mobile in some cases) browsers.
Some will use these for regular testing, as well as for single tests.
Also screenshots filmstrips videos
Key is consistency - Stable - Bandwidth - Latency
Can’t compare otherwise
WebPageTest is a fantastic resource -
it’s free,
and tests from real browsers all over the world.
Can build scripts to do things like authentication, click-paths.
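For flavour, WPT scripts are simple whitespace-delimited command lists. A sketch of a hypothetical login click-path - the URL and element ids are made up, and command names should be checked against the WPT scripting documentation:

```text
logData        0
navigate       https://example.com/login
setValue       id=username    testuser
setValue       id=password    secret
clickAndWait   id=submit
logData        1
navigate       https://example.com/account
```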
API to run tests, and get results,
plenty of tools use this to automate measurement
build pipelines - other reporting suites (sitespeed.io)
Real mobile devices (Android and iOS) - US based.
Open source, available on github…
Run your own private instance on your own network -
up and running in a few minutes on AWS (pre-built AMIs).
Great tool for testing before production as you can put it anywhere you need!
To know how our site is really performing for end users, we need to get metrics from them.
We know browsers provide a mechanism for getting data
how do we get the millions(?) of data points and make sense of them?
Start on the rum… no, “Real User Monitoring”
Typically, small JS tag
collect the metrics and beacon to a portal
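The tag itself is typically a few lines: read the timeline after load and beacon a small payload. A sketch with a hypothetical /perf collection endpoint; the payload builder is split out so it can be exercised without a browser:

```javascript
// Build the beacon payload from a Navigation Timing entry
function buildPayload(nav, path) {
  return {
    url: path,
    ttfb: nav.responseStart,
    dcl: nav.domContentLoadedEventEnd,
    load: nav.loadEventEnd
  };
}

// In a browser - sendBeacon survives page unload, unlike a plain XHR:
// window.addEventListener("load", () => setTimeout(() => {
//   const [nav] = performance.getEntriesByType("navigation");
//   if (nav) navigator.sendBeacon("/perf",
//     JSON.stringify(buildPayload(nav, location.pathname)));
// }, 0));
```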
Some analytics tools, like GA, will also collect basic performance data
Usually very basic and heavily sampled
But how do we make sense of all this data?
Eternal question for RUM data.
Portals allow you to analyse the avalanche of data
Often averages, percentiles or aggregations
Valuable, but takes work
Allows visibility into UX
Further investigation before conclusions can be drawn
Poor performance in Mexico could be
poor CDN performance in the region
A local connectivity issue
Even, a single data point from a user on dial-up ;)
HISTOGRAMS
Other Tools
Sitespeed - CLI - PhantomJS - WPT runner - internal
Perfbar in UAT or for internal users
CI plugins - fail the build on broken budgets
So what are we going to do with all this data we’re collecting?
Use to optimise the site
Find areas to improve,
RUM data might show situational optimisations
But what else can we do with it?
Speedcurve offers a number of high-level visualisations.
Example here shows number of images on homepage
Marked performance budget
Markers to show deployments.
Can be used to “publicise” site performance
I know teams that display these around the office
Make sure everyone knows what’s going on.
“performance is everyone’s problem”.
API access to the data,
Build custom dashboards
Graphite (with Grafana as a front-end) on the left,
and Splunk on the right.
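Pushing your own metrics into Graphite is straightforward because its plaintext protocol is just "metric value timestamp" lines sent to port 2003. A sketch - the metric path is illustrative:

```javascript
// Format one data point for Graphite's plaintext protocol
// (one line per point, sent over TCP to port 2003)
function graphiteLine(metric, value, timestampSecs) {
  return `${metric} ${value} ${timestampSecs}\n`;
}

// e.g. pushed with net.connect(2003, "graphite-host") from a test runner
console.log(graphiteLine("site.homepage.speedindex", 2400, 1456790400));
```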
Flexibility to integrate performance data
business needs,
other data sources like analytics,
combining synthetic and RUM data,
Build your own story
Display data in a way that’s meaningful to everyone.
So How…
I’ll leave you with this quote from a 2011 blog entry from Ian Malpass at Etsy… this is their philosophy.
However it’s important to remember to focus on what’s important to you, while collecting all the data you possibly can - you never know when it may be useful!
https://codeascraft.com/2011/02/15/measure-anything-measure-everything/