On June 15 came the news that the gradual rollout of Google’s Page Experience Update had finally begun. It’s expected to take until August to complete, but the beginnings are upon us, and when it’s done, it’s going to put page performance and usability closer to the forefront of SEO than they’ve ever been before.
UpBuild is responding with a major expansion of our own. We’ve taken the technical SEO deliverable that we originally called the Mobile Performance Deep Dive and transformed it into the Page Experience Deep Dive: the most comprehensive evaluation we could manage of the numerous dimensions of page experience that Google has begun to take into consideration. We’ve added a multitude of usability audits (from CrUX and Lighthouse), introduced the Core Web Vitals into the mix, and begun treating the mobile-first perspective as the universal given that it now is in SEO, rather than emphasizing it. “Page Experience” is the new “Mobile-Friendliness”, and the Page Experience Update is the logical evolution of the mobile-first methodology updates of 2015–19, so it’s only fitting that our deliverable should evolve along the same lines.
Because page experience is an elusive and many-faced creature, and because it isn’t clear how much we’ll be able to learn over the next three months about how the algorithm assesses it, I’m going to share the composition of our new deliverable in detail, to start a conversation among SEOs about how we’re assessing it.
The way we source data for this document is to gather a representative page sample from our client’s site (100 or so popular pages, taking care to span the full range of the site’s templates) and feed it into a spreadsheet we’ve created, which calls the CrUX and PageSpeed Insights APIs and pulls results for every sampled page on the metrics we’ve chosen to focus on. These tools run 145 audits in total on a page you submit, and we’ve chosen to build our document around 32 of them: the top-line Performance audits (the three Core Web Vitals plus First Contentful Paint, Speed Index, Total Blocking Time, and Time to Interactive), and then a selection of the many granular audits that cover individual, concrete aspects of page experience and are the basis for calculating those top-line aggregate scores.
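For anyone who would rather script this data pull than run it through a spreadsheet, here is a minimal sketch of the PageSpeed Insights side of the process. The endpoint is the public v5 API; the helper names, `YOUR_API_KEY`, and the example URL are my placeholders, not part of our actual sheet.

```python
# Sketch: querying the PageSpeed Insights v5 API for one page.
# Substitute a real API key and your own page URLs.
import json
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_psi_url(page_url: str, api_key: str, strategy: str = "mobile") -> str:
    """Construct the PageSpeed Insights request URL for a single page."""
    params = urllib.parse.urlencode({
        "url": page_url,
        "strategy": strategy,  # mobile-first, in keeping with the update
        "key": api_key,
    })
    return f"{PSI_ENDPOINT}?{params}"

def fetch_psi_report(page_url: str, api_key: str) -> dict:
    """Fetch and decode the full Lighthouse report for one page."""
    with urllib.request.urlopen(build_psi_url(page_url, api_key)) as resp:
        return json.load(resp)
```

Looping `fetch_psi_report` over the 100-page sample gives you the same raw material the sheet works from.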
The granular audits are the focus of the document because they represent where the rubber meets the road; you can only improve your top-line page experience scores by improving your scores on these audits. We selected these 32 because they all satisfy the following criteria: they contain actual appraisals of the point in question and suggest improvements rather than just naming key elements; they’re universal, meaning they apply to all websites irrespective of tech stack; and they’re not already covered in full by our SEO Audit & Action Plan (such as “meta descriptions: present or absent”). Here’s the list:
- Avoiding Multiple Page Redirects
- Deferring Render-Blocking Resources
- Minimizing Main-Thread Work
- Removing Unused Resources
- Minifying Resources
- Avoiding document.write()
- Reducing Server Response Time
- Efficiently Encoding Images
- Serving Images in Next-Gen Formats
- Serving Properly Sized Images
- Serving Responsive Images
- Deferring Offscreen Images
- Using rel="preload"
- Preloading Largest Contentful Paint Image
- Using rel="preconnect"
- Serving Static Assets with Efficient Caching
- Using Passive Event Listeners
- Text Compression
- Setting the "viewport" Meta Tag
- Sizing Content Correctly for the Viewport
- Allowing Browser Zoom
- Ensuring Color and Contrast Accessibility
- Displaying Text in Legible Font Sizes
- Sizing Tap Targets Appropriately
- Avoiding the Meta Refresh Tag
- Avoiding Requesting Notification Permission on Page Load
- Avoiding Requesting Geolocation Permission on Page Load
- Using HTTPS exclusively
- Redirecting HTTP URL requests to HTTPS
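Each item on this list corresponds to an audit ID inside the PageSpeed Insights response, under `lighthouseResult.audits`. As a sketch of how the per-page scores can be pulled out, here is an illustrative mapping for a handful of the audits above; the exact IDs can shift between Lighthouse versions, and the sample payload is invented, so treat both as assumptions rather than a definitive reference.

```python
# Illustrative mapping from a few of the audits above to Lighthouse
# audit IDs. IDs may vary by Lighthouse version; verify against a
# live response before relying on them.
AUDIT_IDS = {
    "Deferring Render-Blocking Resources": "render-blocking-resources",
    "Reducing Server Response Time": "server-response-time",
    "Text Compression": "uses-text-compression",
    "Using HTTPS exclusively": "is-on-https",
}

def audit_scores(psi_response: dict) -> dict:
    """Map each audit of interest to its 0-1 Lighthouse score."""
    audits = psi_response["lighthouseResult"]["audits"]
    return {name: audits[audit_id]["score"]
            for name, audit_id in AUDIT_IDS.items()
            if audit_id in audits}

# Invented, heavily trimmed response for illustration only:
sample = {"lighthouseResult": {"audits": {
    "render-blocking-resources": {"score": 0.5},
    "is-on-https": {"score": 1},
}}}
```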
The sheet pulls each page’s score on each of these points and calculates an average and a median for the entire sheaf of pages tested. From there, to produce a client-facing document, we share those calculations, explain what each audit is testing, and then pick the single page that looks to be the overall lowest-scoring representative of its template, run it back through the PageSpeed Insights UI, and enumerate every specific action the tool recommends taking with that page’s resources to raise each of its scores (e.g. minify this specific block of CSS, load this specific script asynchronously to reduce the burden on the main thread, make the mobile nav’s tap targets bigger by this many pixels). In summary, we begin with the widest possible view of all the “page experiences” it is possible to have on the site, and leave the client with an itemized list of simple, well-defined actions they can take to make each of those experiences the best it can be, prioritized according to the size of the impact we expect each one to make.
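The roll-up step above can be sketched like so, assuming the per-page scores for a given audit have already been pulled from the API. The function name and the page URLs and scores here are invented for illustration; the lowest-scoring page it flags is the candidate for the hands-on PageSpeed Insights walkthrough.

```python
# Sketch: summarizing one audit's scores across the page sample,
# as the sheet does, plus flagging the lowest-scoring page.
from statistics import mean, median

def summarize_audit(scores_by_page: dict) -> dict:
    """scores_by_page maps page URL -> 0-1 score for a single audit."""
    values = list(scores_by_page.values())
    return {
        "average": mean(values),
        "median": median(values),
        "lowest_page": min(scores_by_page, key=scores_by_page.get),
    }

# Invented data, not real client scores:
pages = {"/": 0.9, "/blog/": 0.4, "/pricing/": 0.7}
summary = summarize_audit(pages)
```

Running this per audit reproduces the average/median columns of the sheet and surfaces the page most in need of the itemized action list.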
I’ll check in on this approach again in August and report any changes that we’ve decided to make, in response to whatever we may end up learning over the summer about the kinds of ranking judgments Google is actually making based on page experience. But I think this is a pretty good start. If you’re in SEO, are you doing anything similar? If your method is different, how is it different? Are there page experience considerations that we’re ignoring? Should we be weighting these factors in some way, rather than just addressing them all on a level plane? This is the conversation that I hope to start by sharing this early, so if you pipe up in the comments, you’ll only be doing me a favor. Happy summer!