There comes a time in every SEO agency’s life when it has to reexamine its audit template, to give the audit an audit, if you will. That time ideally comes every 18 months or so, though some agencies go years upon years without updating their audit templates, owing to institutional stagnation or simply to the fast pace of work typical of SEO. In UpBuild’s case, it wasn’t quite that long a wait, but it was a bit longer than 18 months, and boyyyyyy are we glad we set aside time to do it when we did.
This is not to suggest that we spent the last year and a half producing subpar SEO audits full of outdated recommendations. On the contrary, our audits during that time were good and current, but only because our team had stopped working from the template altogether. Each new audit began as a copy of the most recently completed audit for a different client: we would wipe out the findings, add or subtract slides borrowed from other completed audits, and write new ones whenever major landscape changes dictated. Within a few years, the poor thing had no standards left and no central point of reference, having passed through the hands of five or six different authors. So when our newest employees arrived a few months ago and were soon to be writing SEO audits of their own, we knew it was finally time to give the template the TLC it was due.
In conjunction with my co-worker Alex, and with no small amount of help from Mike and Ruth, I finally set about leading the audit template overhaul in late 2018, and we’re now feeling better about our SEO audits than ever. I thought I’d provide a brief on the biggest changes for the pleasure of you fellow SEOs out there who think about these things. Let me know in the comments if there are any points that you think merit further discussion!

Page Quality
I’ll lead with our largest and most ambitious change: the creation of a new Page Quality section in which we temporarily assume the role of a Google Quality Rater and assess sites according to our best interpretation of the Quality Rater Guidelines. We asked around a little but couldn’t drum up an example (an admitted one, anyway) of another SEO agency whose audit included a simulated PQ rating. But since this rating system comes right out of Google’s mouth, and delineates the basic quality standard a page must meet in order to earn the right to be served in any searches at all, it seemed like a sensible new addition. Moreover, the 2018 edition of the guidelines was by consensus stricter and more intense than ever (chiefly but not exclusively for how it escalated Google’s fight against fake news and disinformation), which gave the impression that Google might well be not only expanding its standards of Page Quality, but enforcing them more strictly in its ranking algorithm. We agreed unanimously to build in a section that would subject our clients’ sites to the same kind of scrutiny a Google Quality Rater would apply.
Like most SEO audits, ours awards numerical scores, which means that any interpretation of SEO literature (such as the above Guidelines) has to be quantified somehow. As shown in the links shared above, the fundamental points of evaluation are largely measured on five-point scales (e.g. Highest, High, Medium, Low, and Lowest Quality), making it easy to configure the scoring, at least in that one dimension. The second dimension, which entailed defining individual criteria and deciding how to weight them, required more creative judgment than the first, but it came naturally enough; we chose to assign one score to Main Content Quality, another to E-A-T, and so on. But there was a third dimension to consider, and this was the one that required introducing a completely new kind of arithmetic to our scoresheet. I’m talking about the principle of the YMYL site.
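Before we get to that, here is a minimal sketch in Python of how those first two dimensions can be turned into numbers. The criteria list, the auditor ratings, and the 0-to-4 mapping are placeholders for illustration, not our actual scoresheet values.

```python
# Map the five-point Quality Rater scale onto numbers (placeholder mapping).
QUALITY_SCALE = {"Lowest": 0, "Low": 1, "Medium": 2, "High": 3, "Highest": 4}

# Hypothetical criteria and the ratings an auditor assigned to each.
pq_ratings = {
    "Main Content Quality": "High",
    "E-A-T": "Medium",
    "Website Reputation": "Low",
}

# Convert each rating to a number and express the section score as a
# percentage of the maximum possible.
numeric = {name: QUALITY_SCALE[rating] for name, rating in pq_ratings.items()}
section_score = sum(numeric.values()) / (len(numeric) * max(QUALITY_SCALE.values()))
print(f"Page Quality section score: {section_score:.0%}")  # 6 of 12 points -> 50%
```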
Your Money or Your Life Sites
As we have written about in the past, Google’s designated “Your Money or Your Life (YMYL)” sites are the ones that directly engage the user’s finances or major life choices, and thus have the potential to damage the user’s well-being if they are unscrupulous or careless. These include sites that sell a product or subscription and take credit card numbers in order to do it, along with sites that deal in medical, legal, or financial advice. Google has made it clear in each edition of the Quality Rater Guidelines that PQ standards are higher on YMYL sites because the stakes of visiting those sites are higher, which suggests that PQ factors more strongly influence ranking on such sites than on others. This issue compelled us to put on our thinking caps. We could certainly add a line to the audit scoresheet asking the auditor to determine whether the site was or was not a YMYL site. But how would we quantify the auditor’s answer? Nothing additive could work here; being YMYL or not is not a cause to add or subtract points. It would need to be translated into a multiplier.
One of the features of our audit’s summary score from the beginning has been the dimension of Importance that we assign to each scoring element. We created a simple three-tiered Importance rating (High, Medium, Low), applied one of those values to each scoring element according to our best understanding, and then calculated the summary score for each major section, and for the audit as a whole, by weighting the elements according to that Importance rating. (An explanatory slide at the start of each audit walks the client through how this weighting works.)
It went without saying from the start that Page Quality would be given the High Importance rating, and this, we felt, should hold even for non-YMYL sites. What was needed was not merely the “Is the site a YMYL site?” question, but a reformulation of the scoresheet so that if the answer was No, the section’s points would be worth as much as any other High Importance factor, and if it was Yes, they would increase in weight to something new and singularly high. I made a judgment call as to how large the increase should be, added the question, added a few new IF statements to the formulas in the scoresheet, and it was done. This was far from groundbreaking math, but it was fun to add a new dimension, the first of its kind, to our scoresheet: one that varies the weight a section carries according to a separate but related point of evaluation. I still think it’s the neatest addition we made.
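For anyone curious about the mechanics, here is a rough sketch of that logic in Python. The importance weights, the 1.5x YMYL multiplier, the element names, and the scores are all placeholders standing in for what actually lives in the scoresheet’s IF statements.

```python
# Placeholder importance weights and YMYL multiplier; the real values live
# in our scoresheet and are not reproduced here.
IMPORTANCE_WEIGHTS = {"Low": 1.0, "Medium": 2.0, "High": 3.0}
YMYL_MULTIPLIER = 1.5  # assumed bump for Page Quality on a YMYL site

# Each element: (name, score out of 100, importance tier, YMYL-sensitive?)
elements = [
    ("Page Quality", 62, "High", True),
    ("Metadata", 80, "High", False),
    ("XML Sitemaps", 95, "Medium", False),
]

def summary_score(elements, site_is_ymyl):
    """Weighted average of element scores. Page Quality counts like any other
    High Importance element on a non-YMYL site, and gets a singularly high
    weight when the site is YMYL (mirroring the scoresheet's IF statements)."""
    total, weight_sum = 0.0, 0.0
    for name, score, importance, ymyl_sensitive in elements:
        weight = IMPORTANCE_WEIGHTS[importance]
        if ymyl_sensitive and site_is_ymyl:
            weight *= YMYL_MULTIPLIER
        total += score * weight
        weight_sum += weight
    return total / weight_sum

print(f"Summary if not YMYL: {summary_score(elements, False):.1f}")
print(f"Summary if YMYL:     {summary_score(elements, True):.1f}")
```

The point of doing it this way, rather than adding or subtracting points, is that the YMYL answer changes how much the Page Quality section is worth, not how well the site scored on it.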
Web Content Accessibility Guidelines Compliance
It was in a similar spirit that we introduced the audit’s other completely new, large, multi-point section: a measure of basic user accessibility, by reference to the W3C’s Web Content Accessibility Guidelines. These were first published in 1999 (WCAG 1.0), updated in 2008 (WCAG 2.0), and updated again in 2018 (WCAG 2.1); they are also the standards that Section 508 of the Rehabilitation Act references for US federal websites. They prescribe basic standards that a website should meet in order to be usable by as many people as possible: chiefly so that people with disabilities can get the same use out of a website as able-bodied people, and also (most urgently) so that the site does not inadvertently trigger a medical event, e.g. a seizure for an epileptic user.
We had long stressed the importance of “accessibility” in our audit (it is in fact the name of one of the document’s five major sections), but had defined the term in the typically blinkered SEO way: as search engine accessibility. The section had evaluated the health of the site’s robots.txt file, XML sitemaps, navigation links, and other classic ranking factors of that ilk, but had tragically failed to consider the user. The inspiration to change that came from a client who, at the very beginning of our engagement, raised a concern about their site in exactly this regard. Good on them for their self-awareness, and lucky for us that we got that exposure to this whole area of evaluation before revising our audit, because these things simply matter, for the good of humanity. Even if Google did not appear to be placing increasing emphasis on usability as a ranking factor (as evaluated within its ever-growing collection of Chrome user data), we still would have created this section as a document of the site’s basic decency. I am as proud of this addition as I am of the PQ section.
As a bonus, this addition required even less quantification magic than the PQ section did, as the WCAG themselves are written as discrete points of evaluation, each of which can be translated with relative ease into a concrete Yes or No question for the auditor. The guidelines’ own table of contents practically hands you the section breakdown, with every success criterion filed under one of four principles: Perceivable, Operable, Understandable, and Robust.
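To illustrate, here is a sketch of how a few of those points read once they are phrased as auditor questions. The criteria selection, question wording, and flat pass/fail scoring are simplifications for the example, not our actual checklist.

```python
# A handful of WCAG success criteria rephrased as Yes/No questions for the
# auditor, grouped under the guidelines' four principles.
WCAG_CHECKLIST = {
    "Perceivable": [
        "1.1.1 Do images and other non-text content have text alternatives?",
        "1.4.3 Does text meet the minimum contrast ratio against its background?",
    ],
    "Operable": [
        "2.1.1 Is all functionality available from a keyboard?",
        "2.3.1 Does the site avoid content that flashes more than three times per second?",
    ],
    "Understandable": [
        "3.1.1 Is the page's language declared (e.g. <html lang=\"en\">)?",
    ],
    "Robust": [
        "4.1.2 Do custom UI components expose a name, role, and value to assistive tech?",
    ],
}

# Auditor answers keyed by criterion number (True = Yes). Hypothetical data.
answers = {"1.1.1": True, "1.4.3": False, "2.1.1": True,
           "2.3.1": True, "3.1.1": True, "4.1.2": False}

score = sum(answers.values()) / len(answers)
print(f"WCAG compliance score: {score:.0%}")  # 4 of 6 -> 67%
```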
Screen Reader Compatibility
This ease comes with one exception, though. The final point of the WCAG (4.1: Maximize compatibility with current and future user agents, including assistive technologies) covers screen reader compatibility, and though we are content with the WAVE Chrome plugin for help in assessing individual pages on this point, we can’t help but wonder if there is a tool out there with a cleaner reporting interface and/or broader capability (e.g. the power to scan a whole site). Has anyone tried the WAVE team’s Dinolytics product, which promises these things? Or a satisfactory alternative? Let us know in the comments.
Mobile-First Considerations
Longtime readers of this blog will remember a post I wrote one year ago lamenting the disconnect between the mobile-first world of real search behavior (and algorithmic focus) vs. the desktop-first world of day-to-day SEO work, including the crafting of site audits. This revision of our audit template created a golden opportunity to apply the mobile-first premise to the audit process in a realer and more careful way than ever before, and though we may not have cracked the matrix entirely, we’re a lot better off than we were before. Here’s a rundown of the new ways in which we were able to apply the mobile-first perspective.
Page Speed
We got a lot of help from Google themselves on this one, as PageSpeed Insights improved markedly once Lighthouse took over as its engine. But the improvement in the tangibility and transparency of the metrics (e.g. First Meaningful Paint, Time to Interactive) was only half the story; the other half was that the tool quietly became mobile-first, with the Desktop numbers (in a perfect inversion of the old tool’s UI) available only to those who find and click the button near the top left of the report.
The message embedded in this UI is not merely that the mobile numbers matter more than the desktop numbers, but that the mobile numbers should be considered the official numbers, with the desktop numbers treated as a footnote. Accordingly, we changed our audit’s page speed evaluation so that it’s mobile-only. Even if the client is B2B and its traffic leans heavily toward desktop, we don’t deviate from this mobile-only model, because a mobile user lost to slow page load is still a user lost, and because on a responsive site, any improvement made to mobile page speed will lift desktop as well.
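If you want to pull those mobile-only numbers programmatically, here is a minimal sketch against the PageSpeed Insights v5 API. The response field names reflect our reading of that API and should be checked against the current schema; the URL is just an example.

```python
# Sketch: fetch mobile-only Lighthouse metrics from the PageSpeed Insights v5 API.
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def mobile_pagespeed(url, api_key=None):
    params = {"url": url, "strategy": "mobile"}  # mobile-only, per our audit model
    if api_key:
        params["key"] = api_key
    data = requests.get(API, params=params, timeout=60).json()
    lighthouse = data["lighthouseResult"]
    return {
        "performance_score": lighthouse["categories"]["performance"]["score"],  # 0 to 1
        "first_contentful_paint": lighthouse["audits"]["first-contentful-paint"]["displayValue"],
        "time_to_interactive": lighthouse["audits"]["interactive"]["displayValue"],
    }

print(mobile_pagespeed("https://www.example.com/"))
```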
Mobile Click-Through Rate
We had a section on organic CTR in the audit from the beginning, as a way to establish a baseline of ranking potential: we would always look at the site’s overall CTR in Google Search Console to see where it sat relative to the magic 5% mark, since that is roughly where the Page One “fold” falls on the Google organic CTR vs. position curve. To be clear, there is no world in which it makes sense to tell your client “your CTR is bad; you should do something about that” without any more substantive recommendations, and of course the audit’s preceding sections on metadata, internal links, page copy, and usability already prescribe every solution we could dream up to improve CTR. But historic CTR is a good measure to take early in a client relationship to see how steep the fight to increase organic traffic is going to be, and the audit is as reasonable a place as any to take it.
So, given the wild differences that have arisen between mobile and desktop SERP experiences over the last few years, we decided as part of this revision to start recording mobile organic CTR separately, in addition to overall organic CTR. In this way, we measure everything, but deliberately force the final score to tilt toward mobile, in keeping with today’s traffic pattern norms.
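As a rough sketch of the measurement itself (with made-up numbers, and assuming a per-device export from Search Console’s Performance report), the overall and mobile calculations might look like this:

```python
# Compare overall vs. mobile organic CTR against the ~5% "Page One fold"
# benchmark, using (device, clicks, impressions) rows exported from
# Search Console. The numbers here are made up.
rows = [
    ("MOBILE", 4200, 110000),
    ("DESKTOP", 2600, 38000),
    ("TABLET", 300, 7000),
]

def ctr(rows, device=None):
    clicks = sum(c for d, c, i in rows if device is None or d == device)
    impressions = sum(i for d, c, i in rows if device is None or d == device)
    return clicks / impressions

BENCHMARK = 0.05  # roughly where the Page One "fold" falls on the CTR curve
print(f"Overall CTR: {ctr(rows):.1%} (benchmark {BENCHMARK:.0%})")
print(f"Mobile CTR:  {ctr(rows, 'MOBILE'):.1%} (benchmark {BENCHMARK:.0%})")
```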
Navigation
This is one of those true hands-on mobile UX additions I was talking about wanting to make in last year’s blog post. We introduced a new factor to our navigation evaluation that asks the auditor to load the mobile site and rate the lived, firsthand (pun intended) experience of the nav. Instead of measuring the tap targets with a tool and rating them according to how they compare to a web standard, we have the auditor actually use them and rate their usability qualitatively. And while they’re at it, they can assess whether the mobile nav is easy to find in the first place, easy to pull up, easy to move between dropdowns on, and easy to move around in vertically without stretching the fingers too far: all matters that an objective tool is going to do an inadequate job of scoring.
AMP
We didn’t have an AMP section before! AMP is widely enough adopted by now that it seems fair to rate any site with content-rich pages on whether it’s using it. We set it up to assess two points: 1) whether the site is using AMP at all, and 2) the experiential factor: whether the AMP version looks and feels like the proper mobile page, as opposed to looking and feeling like a thin, quasi-e-reader scrape of it. Basically, we feel that every site should be invested in AMP and can’t think of any excuse not to be.
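For the first of those two points, detection is mechanical: a page that offers an AMP version advertises it with a <link rel="amphtml"> tag. Here is a minimal sketch of checking for that tag; the URL is just an example.

```python
# Sketch: check whether a page advertises an AMP version via <link rel="amphtml">.
import requests
from html.parser import HTMLParser

class AmpLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.amp_url = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "amphtml":
            self.amp_url = attrs.get("href")

def find_amp_url(page_url):
    html = requests.get(page_url, timeout=30).text
    finder = AmpLinkFinder()
    finder.feed(html)
    return finder.amp_url  # None means no AMP version is advertised

print(find_amp_url("https://www.example.com/blog/some-post/"))
```

The second, experiential point still calls for a human: load the AMP page itself and judge whether it reads like the real mobile page or a stripped-down scrape.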
Without getting any deeper into the secret blend of herbs and spices, this is the state of UpBuild’s SEO audit in 2019. What do you think? Is there anything else you’d like to know about? Anything you think we should be doing differently? You know how to reach us. Let’s rap!