Google’s 2018 Quality Rating Guidelines: The Search Giant Plants a Flag in Quicksand

It’s been almost three years now since Google went full disclosure with their Search Quality Rating Guidelines. While they don’t keep to a rigid update schedule, they have revised the guidelines a few times since that first 2015 publication, and the most recent edition arrived on July 20 of this year.

Beginning with the previous update, which came in March 2017, a number of new revisions with a nakedly sociopolitical focus arrived. They seemed to reveal an understanding on the search giant’s part of their role in the feverish and incalculably damaging spread of disinformation on the Internet in this decade, and a desire to help beat back the current. With this newest edition of the guidelines, they have intensified that focus, sharpening the language to nullify ambiguities and condemning sites that peddle “fake news” and straight-faced conspiracy theories as objectively poisonous, in terms considerably clearer and more straightforward than anything in the 2017 edition.

One could have a field day speculating as to Google’s motivation for these changes:

  • Maybe they turned up the heat on disinformation because the feedback they got from the round(s) of Quality Ratings following last year’s guidelines suggested they hadn’t been adequately clear or aggressive in their language.
  • Maybe they have browsing data suggesting that user engagement with fake news — reading and sharing patterns, evidence of people being “sucked in” and radicalized, etc. — has gotten even worse in the intervening sixteen months.
  • Or maybe this is pure image anxiety, and they’re simply trotting out a PR campaign to help them salvage themselves from the flaming ball of wreckage that tech companies have made of their reputations by profiteering off this social disease from under a cloak of Goldilocks “all political opinions are equally valid” neutrality.

We can safely assume that last one is in the picture, even if there are also real ideals mixed up with their basic need to turn a profit. But motivation notwithstanding, there is a darker question looming over this new effort: namely, whether it will have any effect in diminishing the spread of disinformation at all.

What Are the Search Quality Rating Guidelines?

Google search results are continually evaluated for quality by a team of human Quality Raters, who are asked to score the pages served for a given query on a five-point scale of quality: Highest, High, Medium, Low, Lowest. Like the big “core” algorithm updates, updates to the Quality Rating Guidelines roll out about once a year, and amount to a reality check that holds the newest version of Google Search up to human scrutiny to determine whether it is returning the kinds of high-quality results the search team’s engineers intended. While Quality Rater evaluations do not typically inspire direct tweaks to an algo update or have direct influence on a given site’s rankings, and while the SERPs that Quality Raters review amount to a minuscule fraction of the total SERPs generated in reality, the Quality Rating project plays a vital role in sculpting the algorithm’s ever-refining understanding of what distinguishes a good and useful search result from a bad one. For as long as we deny our full trust to a machine-learning-driven algorithm in making that distinction, this kind of regular human review is essential to maximizing the utility of the search engine to users.

To keep these Quality Raters operating from a shared set of values, Google refers them to a guidebook describing factors of Page Quality that they are expected to consider in their assessments. In explicitly defining a “quality” result, the guidebook supplies precious insight into the intentions that the Google Search team has for its flagship product. Moreover, revisions to the guidebook indicate — arguably more strongly than any other text anywhere — how those intentions have changed. In other words, a new update to the Google Search Quality Rating Guidelines provides the clearest window available into what the Google Search team has been thinking about most recently: what the people behind the algorithm believe to be its most significant shortcomings at present, and the specific hopes they have for how those shortcomings might be addressed to the greater benefit of search results everywhere.

For as complete an unpacking of the July 2018 revisions as you’re going to find anywhere, please read Jennifer Slegg’s full review on The SEM Post, from which this post took much of its direction. And for a general introduction to the Google Search Quality Rating Guidelines as a public property, written shortly after they were first made such in 2015, look no further than this excellent overview written by UpBuild’s own Ruth Burr Reedy.

What Are the Basic Elements of Page Quality as Defined by Google?

Without unpacking the guidelines’ traditional standards in full (again, refer to Ruth’s post for that), I will note here that from the beginning, the defining question that Quality Raters have been instructed to ask of each page, as the primary indicator of Page Quality, has been “does the page achieve its purpose?” In other words, does it succeed in leaving no gap between what it should deliver — what it appears to promise to deliver — and what it actually delivers?

In brief, the more granular points that Quality Raters have been asked to consider on a given page have typically included the following:

  • Does the page focus on strong Main Content (MC)?
    • A high- or highest-quality page would make its largest investment in well-written and useful MC of satisfying length, and would put the MC front and center on the page, leaving no doubt that reading it was the true point of the page. A low- or lowest-quality page would present MC of unsatisfying length or slapdash quality, and/or would obscure the MC either with ads or with distracting noise that makes for a poor user experience. In the worst cases, the page would fail altogether to deliver content of substance, instead suckering the user in for an aggressive sales pitch or, worse, a bombardment of ads or malware.
  • Does the page demonstrate strong levels of E-A-T (Expertise, Authority, and Trustworthiness)?
    • A high- or highest-quality page would be credited to an author regarded as a qualified expert on the topic (whether by dint of credentials or experience), would belong to a domain regarded as a voice of authority on the topic, and would transmit signals of trustworthiness such as a polished look and feel, cited sources, visible evidence of user engagement, and (especially in the case of Your Money or Your Life pages) a secure backend obviously capable of protecting private user information. A low- or lowest-quality page would lack these markers of E-A-T.
  • Does the page belong to a website with a good reputation?
    • A high- or highest-quality page would belong to a site with a demonstrated history of good writing on its chosen subject, or of serving customers well with its product or service, and would be transparent and objective on the matter. A low- or lowest-quality page would belong to a site with a reputation that was blemished, non-existent, or difficult to investigate on the web. Quality Raters have always been asked to do external research on the site as a whole, beyond the single page that they are evaluating, in the interest of assessing reputation from as complete a foundation of information as possible.

“Needs Met”

In addition to these instructions for assessing Page Quality, Google also asks Quality Raters to submit a separate evaluation termed “Needs Met” to assess whether or not a page would meet the needs of someone searching for the specific query entered. “Needs Met” is also scored on a five-point scale: Fully Meets, Highly Meets, Moderately Meets, Slightly Meets, and Fails to Meet.

While not centrally germane to this post, this discrete assessment is important because it distinguishes the matter of a page’s innate quality from the matter of its relevance to a specific query. We have all had the experience of being served a page in search that commanded our trust, and perhaps even our interest, but that was not what we were looking for when we input the query that we did. Quality and relevance are indeed two separate questions and they deserve two separate axes of evaluation.
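
To make the two-axes point concrete, here is a minimal sketch, in Python, of how a single rater verdict might be recorded. The scale labels are Google’s own; everything else (the names, the structure, the example page) is my illustration, not a description of any real rater tooling.

```python
from dataclasses import dataclass
from enum import IntEnum

class PageQuality(IntEnum):
    LOWEST = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    HIGHEST = 5

class NeedsMet(IntEnum):
    FAILS_TO_MEET = 1
    SLIGHTLY_MEETS = 2
    MODERATELY_MEETS = 3
    HIGHLY_MEETS = 4
    FULLY_MEETS = 5

@dataclass
class RaterEvaluation:
    """One rater's verdict on one page served for one query."""
    query: str
    url: str
    page_quality: PageQuality  # innate quality of the page itself
    needs_met: NeedsMet        # relevance to this specific query

# A trustworthy, well-made page can still fail the query it was served for:
verdict = RaterEvaluation(
    query="miley cyrus tour dates",
    url="https://example.com/miley-cyrus-career-retrospective",
    page_quality=PageQuality.HIGH,      # polished, expert, reputable
    needs_met=NeedsMet.SLIGHTLY_MEETS,  # but not what this searcher wanted
)
```

The two fields vary independently: the hypothetical retrospective above is a perfectly good page that nonetheless barely meets the needs of that particular query.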

These basic points are the evergreen stuff that will likely comprise the foundation of Quality Ratings forever. The most notable point of contrast between the original set of public guidelines — which stopped there — and the newest anti-disinformation editions is that the traditional guidelines make no statement about the core thesis of a piece of content being indicative of Low quality, except by somewhat oblique reference to E-A-T and website reputation. This is what has changed.

What Has Changed

Let’s begin with the previous Search Quality Guidelines revision, dated March 2017, which was the first to announce an effort to combat “fake news”. At that time, the revisions oriented toward this effort amounted to:

  • Classifying news articles as YMYL (Your Money or Your Life) pages for the first time, in hopes that Quality Raters would regard them more seriously.
    • Previously, the only pages given the YMYL stamp were pages that offered information or advice related to health, medicine, or finance, or that handled the personal information of users.
  • Adding the following to their list of Low-quality page indicators:
    • “Including inaccurate information, such as making things up, stretching the truth, or creating a false sense of doubt about well established facts.”
    • “Failing to cite sources, or making up sources where none exist.”
  • Adding this supplementary sentence: “Inaccurate or misleading information presented as fact is also a reason for Low or even Lowest quality ratings.”
  • Adding the following categorical example to their section on “deceptive pages or websites”:
    • “Pages or websites which appear to be deliberate attempts to misinform or deceive users by presenting factually inaccurate content (e.g., fake product reviews, demonstrably inaccurate news, etc.).”

    And the following more specific examples to the same section:

    • “A webpage or website that impersonates a different site (e.g., copied logo or branding of an unaffiliated site, URL that mimics another site’s name, etc.).”
    • “A webpage or website looks like a news source or information page, but in fact has articles with factually inaccurate information to manipulate users in order to benefit a person, business, government, or other organization politically, monetarily, or otherwise.”
    • “A nonsatirical webpage or website presents unsubstantiated conspiracy theories or hoaxes as if the information were factual.”
    • And a real exhibit of a self-styled news page reporting the death of Miley Cyrus, a false report given with no cited sources, no credited author, no date, and no information anywhere about the “news” organization behind the website.
  • Adding the following indicator of a page that should be rated “Fails to Meet” on the “Needs Met” scale: “Pages which directly contradict well established historical facts (e.g., unsubstantiated conspiracy theories), unless the query clearly indicates the user is seeking an alternative viewpoint.” [Be sure to remember that final clause.]

These were the new additions a year and a half ago, likely composed in reaction to the cold-water shock of both Brexit and the 2016 US presidential election, and the trickle of revelations over the subsequent months that revealed just how deep the rot of lies on the Internet — and the danger presented by people who believe them — went.

The 2018 Double Down

Now, let’s look at what changed between that edition and the newest one. The new revisions pertaining to dis- and misinformation are as follows, presented in the order in which they appear in the text. Just wait until you see how much further they’ve gone:

  • This enormous change to the Page Quality rating process: before a Quality Rater even considers whether the page “achieved its purpose”, they are first asked to verify that the purpose is a “beneficial” or “helpful” one. Here is the text:
    • “Most pages are created to be helpful for users, thus having a beneficial purpose. […] Websites and pages should be created to help users. Websites and pages that are created with intent to harm users, deceive users, or make money with no attempt to help users, should receive the Lowest PQ rating.”

    So it is no longer enough for a page simply to do what it sets out to do. Now, Quality Raters have to be convinced from the start that the thing the page is setting out to do is something that would be good for somebody. Granted, that “something” doesn’t have to be outright charitable; their examples include “to entertain” and “to sell products or services”. Still, this call to verify beneficial purpose was absent from all previous versions of the text, and is restated numerous times in the new edition, typically at the top of every reiterated explication of Page Quality (see Jen Slegg’s post for every occurrence). Nothing in the previous edition drew this kind of line in the sand against pages intended to harm or mislead, and disinformation pages most certainly qualify as such, standing alongside pages pitching scams and pages firing off malware.

  • The addition of a call to research and evaluate not just the reputation of the website hosting the page, but the author or creator of the content. Here is the relevant text:
    • “You may need to identify the creator of the content, if it is different from that of the overall website. […] For content creators, try searching for their name or alias. […] For content creators, look for biographical data and other sources that are not written by the individual.”

    This looks to be even more narrowly focused on disinformation “bubbles” than the text about “beneficial purpose” was. Sources of disinformation often thrive in the public square by referencing each other, a phenomenon called “circular reporting” which this instruction seems to be attempting to rupture. Like the references to “beneficial purpose”, these references to the importance of the reputations of content creators are repeated numerous times throughout the document.

  • A rewritten explanation of E-A-T insisting that E-A-T be demonstrated not just by the content or the hosting site, but by the content creator. They have even revised the definitions for each of the three letters in the acronym to make explicit reference to creators. Like the above bullet, this is an effort to reward qualified voices in search and bury unqualified ones, regardless of the connection between an author and a page. This section goes on to flesh out expectations of a quality informational page in a variety of areas of study; I will replicate Ms. Slegg’s habit of italicizing the language new to this year’s text in reproducing the statements made in the text about news and science pages:
    • “High E-A-T news articles should be produced with journalistic professionalism — they should contain factually accurate content presented in a way that helps users achieve a better understanding of events. High E-A-T news sources typically have published established editorial policies and robust review processes.”
    • “High E-A-T information pages on scientific topics should be produced by people or organizations with appropriate scientific expertise and represent well-established scientific consensus on issues where such consensus exists.”

    This new text demands, with far greater specificity and seriousness than before, that Quality Raters seek out the credentials of the pages and authors that they evaluate as proof of their E-A-T, and downvote cases where these credentials are absent or difficult to find.

  • Augmentation of the list of characteristics of a low-quality page to include the following new ones:
    • “The title of the MC [Main Content] is exaggerated or shocking.”
    • “There is an unsatisfying amount of website information or information about the creator of the MC for the purpose of the page (no good reason for anonymity).”
    • “A mildly negative reputation for a website or creator of the MC, based on extensive reputation research.”

    The second and third additions there reinforce points made earlier, with a new emphatic request to make it “extensive” reputation research (this will be restated elsewhere in the document). The first addition above is as explicit an anti-disinformation attack as the text has shown thus far. Then there is this final note in the same section, also new:

    • “If a page has multiple Low quality attributes, a rating lower than Low may be appropriate.”

    In this context, the most generous interpretation of “multiple” is “more than two”. By that definition, a page demonstrating all three of these new factors — which would be true of nearly any disinformation page — would deserve to be rated Lowest, regardless of how well it satisfied expectations of user experience or any of the evergreen standards. That is damning language. (In fact, this demotion rule is mechanical enough to express in code; see the sketch after this list.)

  • An example of clickbait as a marker of Low-quality MC in a later section about how to rate MC quality. The text reads:
    • “Exaggerated or shocking titles can entice users to click on pages in search results. If pages do not live up to the exaggerated or shocking title or images, the experience leaves users feeling surprised and confused. Here is an example of a page with an exaggerated and shocking title: ‘Is the World about to End? Mysterious Sightings of 25ft Sea Serpents Prompt Panic!’ as the title for an article about the unidentified remains of one small dead fish on a beach. Pages with exaggerated or shocking titles that do not describe the MC well should be rated Low.”

    Not all clickbait is outright disinformation — if the article tells the truth after enticing your click under false pretenses, the article itself may not be disinformation — but the two are closely related phenomena with substantial overlap. Many utterly false, deceptive articles — especially those engineered to go viral within social media echo chambers — make a point of crafting “exaggerated or shocking titles” to that end. It is easy to see Google’s decision to encourage the downvoting of clickbait here as motivated by the same anti-disinformation sentiment.

  • A strengthening and explication of their call to conduct research into the reputations of content creators in their description of “mixed or mildly negative reputations”. This text reads:
    • “If the MC was not created by the website, research the reputation of the creator of the MC. While many ordinary people do not have reputation information available on the Internet, you can find reputation information on well-known YouTubers, journalists, authors, bloggers and vloggers, professionals such as lawyers and doctors, etc.

      “Pay attention when there is evidence of mixed or mildly negative—though not malicious or financially fraudulent—reputation. The Low rating should be used if the website or the creator of the MC has a mildly negative reputation.

      “Important: For a YMYL website, a mixed reputation is cause for a Low rating.”

    For the purposes of this blog post, the big takeaway from this new text is “just because somebody has a popular blog or YouTube channel, doesn’t mean they have any idea what they’re talking about.”

  • An expanded definition of a Lowest-quality page which extends to all disinformation and “fake news” pages, as well as pages that promote hate. Here is this section’s new key text:
    • “Websites or pages without a beneficial purpose, including pages that are created with no attempt to help users, or pages that potentially spread hate, cause harm, or misinform or deceive users, should receive the Lowest rating. E-A-T and other page quality characteristics do not play a role for these pages. For example, any page attempting to scam users should receive the Lowest rating, whether the scam is created by an expert or not.”

    The statement that “E-A-T and other page quality characteristics do not play a role for these pages” makes for another major leap in the text’s intensity. Once again, this is language of a strength and clarity unlike anything in the previous editions.

  • A greatly expanded definition — and indictment — of pages that promote hate, most notably changed by mention of hateful content dressed up in attractive language. Since dis- and misinformation so often foster radicalization, and since radicalization can be so easily fostered by algorithms (even Google properties) in the course of trying to keep people engaged, these new attacks on hate content deserve inclusion under this umbrella. Here is the text, with the changes in this year’s edition italicized:
    • “Use the Lowest rating for pages that promote hate or violence against a group of people based on criteria including — but not limited to — race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, socio-economic status, political beliefs, veteran status, victims of atrocities, etc. Websites advocating hate or violence can cause real world harm.

      “Hate may be expressed in inflammatory, emotional, or hateful-sounding language, but may also be expressed in polite or even academic-sounding language.

      “Extensive reputation research is important for identifying websites that promote hate or violence. Please identify reputable and well-established organizations that provide information about hate groups in your locale when researching reputation. Some websites may not have reputation information available. In this case, please use your judgment based on the MC of the page and knowledge of your locale.”

  • Finally, and most potently, an entirely new section called Pages That Potentially Misinform Users. This is the big one, and I feel compelled to reproduce it in full, because it makes everything that has come so far seem mild and suggestive by comparison to the blunt directness of its language. This is where they really swing the bat.
    “The purpose of an informational page is to communicate accurate information. Assume an informational purpose for pages that look as though they are informational or pages that many users go to for information, even if it is not an official news source or an official encyclopedia article. This includes pages that appear to be news, social profile pages spreading news or information, forum discussions about informational topics such as current events, videos which cover news topics, etc.

    “The Lowest rating must be used for any of the following types of content on pages that could appear to be informational:

    • Demonstrably inaccurate content.
    • YMYL content that contradicts well-established expert consensus.
    • Debunked or unsubstantiated conspiracy theories.

    “Lowest should also be used under these circumstances:

    • The content creator may believe that the conspiracy theory or demonstrably inaccurate content is correct, or it is unclear whether they do.
    • The content creators may be deliberately attempting to misinform users.
    • The content creators describe, repeat or spread conspiracy theories or demonstrably inaccurate content without a clear effort to debunk or correct it, regardless of whether the creators believe it to be true. For example, content creators may produce this content in order to make money or gain attention.

    “Some examples of information that would be found on Lowest quality pages include: the moon landings were faked, carrots cure cancer, and the U.S. government is controlled by lizard people. While some of these topics may seem funny, there have been real world consequences from people believing these kinds of Internet conspiracy theories and misinformation.

    “Find high quality, trustworthy sources to check accuracy and the consensus of experts if you are unsure about a topic. Be especially careful with YMYL topics such as medical, scientific, financial, historical, or current events that are necessary for maintaining an informed citizenry.

    “Please research conspiracy theories. Fact-checking websites cannot keep up with the volume of conspiracy theories produced by the Internet. Some conspiracy theories are impossible to debunk because they claim all debunking information is inaccurate. If a claim or conspiracy theory seems wildly improbable and cannot be verified by independent trustworthy sources, consider it unsubstantiated.”

No further comment required.
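
Well, one brief aside before moving on. The “multiple Low quality attributes” demotion rule flagged earlier is mechanical enough to express in a few lines of code, so here is a minimal sketch in Python. It is purely illustrative: the rating labels are Google’s, but the attribute names are my paraphrases of the quoted guideline language, the threshold encodes the generous reading of “multiple” discussed above, and nothing here describes real rater tooling.

```python
from enum import IntEnum

class PQRating(IntEnum):
    LOWEST = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    HIGHEST = 5

# Paraphrases (mine) of the three new Low-quality attributes quoted earlier.
NEW_LOW_ATTRIBUTES = {
    "exaggerated_or_shocking_title",
    "unsatisfying_creator_info",
    "mildly_negative_reputation",
}

def rate_page(observed_attributes: set) -> PQRating:
    """Toy model of the demotion rule: any one attribute caps the page
    at Low, while 'multiple' attributes (read generously as more than
    two) push it below Low, i.e., to Lowest."""
    hits = observed_attributes & NEW_LOW_ATTRIBUTES
    if len(hits) > 2:
        return PQRating.LOWEST  # "a rating lower than Low may be appropriate"
    if hits:
        return PQRating.LOW
    return PQRating.MEDIUM      # neutral baseline, absent any other signal

# Nearly any disinformation page trips all three attributes:
print(rate_page(NEW_LOW_ATTRIBUTES).name)  # -> LOWEST
```

A page doesn’t have to fail the evergreen standards to bottom out here; tripping these three checks alone is enough, which is exactly what makes the new language so damning.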

Whether this is a sincere show of heart or not, Google’s repeated assertions in this year’s guidelines that sites which spread disinformation or “fake news” are innately poor-quality and should be downranked by virtue of the things they say and the thoroughly ugly reputations of the people who say them constitute the most emphatic pushback we’ve yet seen from them on this score. No longer are they countenancing the idea that these kinds of pages and sites could be “high-quality” results for, or capable of “meeting the needs” of, certain people in certain circumstances (whether like-minded people who really believe this stuff or people who might be entertained by the outlandish ridiculousness of the ideas). Their new position is that these kinds of pages and sites are objectively toxic, good for nothing and nobody, and should never be served on good-faith search queries pertaining to current events or history, full stop.

Why This Won’t Work

There’s just one problem, though. The spread of disinformation is not a centralized effort. There’s no gray building in some Russian monotown from which all the suspect blogs, social media accounts, and YouTube channels emanate. Disinformation propagates because human users of the Internet believe and share it, and from what we know about the vetting process that Quality Raters are traditionally subjected to, some of the people who do believe and share it could easily end up among their ranks, doing the rating.

It seems that Google knows of this possibility and feels this anxiety; why else would their language have needed to take on so much more fire and brimstone just a year and four months after they published their last revision, which itself was so much stronger on this point than any that came before? The new language in this year’s edition is a language of urging. And there is, sadly, no reason to believe that urging the misinformed to stop being so misinformed is going to accomplish anything.

Anyone who still supports, for example, InfoWars after almost six years of continual and remorseless Sandy Hook trutherism is likely much too far gone to be pulled back by something as dry as the guidelines given to them by Google for their new part-time gig assessing the quality of search results. Not only that, but anyone meeting that description who gets selected to be a Quality Rater will read these guidelines and most likely either troll their way through the rating process out of spite (since Google is just another faceless elite entity out to oppress them), or at the least rate InfoWars et al. Highest, and The Washington Post and other legitimate journalistic outposts Lowest, out of sincere belief. Through the lens of someone already given over to alternative reality, all of the bullet points listed above are reversible.

Yes, Google is making a valiant gesture by tethering facticity, perceived “beneficialness”, and the importance of a substantive content creator reputation to the more traditional markers of Page Quality and codifying the connection in this document, where it matters most. Also, frankly, their jeremiad about fake news and conspiracy theories in the new section noted in the last bullet is really encouraging stuff; Facebook, to name another tech mammoth, has only scratched the surface of policing the spread of dangerous lies on their platform, and Twitter has been unforgivably deaf and dumb on the subject. But the US, just to name the most obvious country, is dangerously divided on the matter of objective truth — the number of people who believe down is up is not in the thousands but the millions. Thus, the odds of the cross-section of Internet users chosen to be Google Quality Raters coming to include conspiracy theorists and other representatives of untruth are very good indeed, and will likely improve over time. So while the valiant gesture is something I appreciate, it’s almost impossible to be convinced that it’s actually going to help patch these cracks in the planet.

Watch for the language to get even more forceful in next year’s edition. And then, watch for it to continue not to matter.

