On this front, Google announced a comprehensive roll-out of quality improvements to its search results algorithm as part of its Project Owl endeavour. The move aims to cut down on search and auto-complete results showing fake news (content that looks and feels like news but comes from questionable sources or contains overly biased reporting) or offensive content. Google’s approach is to build algorithmic solutions that better surface ‘authoritative’ content from trusted sources (such as Wikipedia), combined with feedback from its (human) search quality raters and Google users.
Inflammatory or fake content in search results is the unintended byproduct of how Google displays results and measures the validity or trustworthiness of content on the web.
While there have been several high-profile examples of misleading or offensive results in the press recently, only 0.25% of search results show offensive or questionable content (according to Google).
So while these changes won’t significantly affect most search results, they should assuage advertisers’ and marketers’ concerns about appearing alongside offensive content, and should incrementally improve the quality of content within Google’s search results.
How it works
New quality rating guidelines for Google’s human ‘search quality raters’
For the past couple of years, Google has publicly released its quality rating guidelines – the comprehensive set of guidelines given to Google’s search result testers. This 150+ page document outlines a wide range of factors for determining what should be considered a poor-quality page.
This year, section 7 outlines what constitutes ‘low quality’ and includes new aspects such as specific examples of poor-quality content (fake news, fake recipes and the like). Following the events surrounding last year’s election, Google is defining these categories more precisely and placing greater emphasis on how it minimises such results.
Ranking signal updates
Google states that it has updated the technology used to determine rankings. While the exact changes have not been disclosed, the rankings will now likely assess the context and authority of content more effectively, demoting or removing questionable results and replacing them with sources and content deemed more authoritative.
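Google has not published its ranking function, but conceptually, blending a topical-relevance score with an authority signal so that low-authority sources are demoted might look like the following minimal sketch. The function name, the `alpha` weight, and the assumption of normalised [0, 1] signals are all illustrative, not Google's actual implementation.

```python
def rank_score(relevance: float, authority: float, alpha: float = 0.7) -> float:
    """Blend topical relevance with source authority.

    Both inputs are assumed normalised to [0, 1]; alpha is an
    illustrative weight, not a published Google parameter.
    """
    return alpha * relevance + (1 - alpha) * authority

# Under this blend, a highly relevant page from a low-authority source
# can rank below a slightly less relevant page from a trusted source.
questionable = rank_score(relevance=0.9, authority=0.1)  # 0.66
trusted = rank_score(relevance=0.8, authority=0.9)       # 0.83
```

The point of such a blend is that ‘popularity’ or relevance alone no longer wins: the authority term lets the system demote content that matches the query well but comes from an untrusted source.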
Direct user feedback for auto-complete and snippets
Two months ago, Google launched a tool for users to report offensive auto-suggest queries. This feature has now rolled out across the entire Google landscape. When searching on Google, users will still see the auto-suggest results, but now a prompt at the bottom of the suggest box will give the user an option to report inappropriate predictions. Clicking on that link prompts the user to select which prediction was inappropriate and why (hateful, explicit, etc.).
More than likely, Google will use this feedback to further improve ranking signals and help train RankBrain (Google’s machine-learning algorithm). The same tool exists for Featured Snippets results in Google, which are commonly used as a basis for Google Android and Home to answer queries. The latter is particularly important, as voice results have been identified as giving offensive answers to loaded questions.
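How report volume could feed back into prediction ranking can be sketched simply: once a prediction accumulates enough user reports, it sinks below unreported ones. Everything here is a hypothetical illustration – the function, the threshold, and the data are assumptions; Google has not published how reports are weighted.

```python
from collections import Counter

REPORT_THRESHOLD = 5  # illustrative: minimum reports before a prediction is demoted

def demote_reported(predictions, reports):
    """Return predictions reordered so heavily reported ones sink.

    predictions: list of suggestion strings, in original rank order
    reports: iterable of reported suggestion strings
    """
    counts = Counter(reports)
    # Stable sort on a boolean key: unreported predictions keep their
    # original relative order; heavily reported ones move to the bottom.
    return sorted(predictions, key=lambda p: counts[p] >= REPORT_THRESHOLD)

suggestions = ["query a", "offensive query", "query b"]
flags = ["offensive query"] * 6
demote_reported(suggestions, flags)  # ["query a", "query b", "offensive query"]
```

In practice a system like this would presumably weight reports by reporter trust and combine them with other signals rather than apply a hard cutoff, but the sketch captures the basic loop the article describes: user feedback accumulates into a signal that reshapes what is surfaced.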
We support Google’s proactive measures in addressing these issues to ensure results have more integrity for advertisers and users. In particular, we commend the measures being a mixture of manual review and user feedback along with algorithmic updates to better scale the improvements.
A core principle of machine learning is that as more data becomes available, the machine ‘learns’ over time and the results improve. With trillions of searches being done on Google, this process and the amount of data required to be effective are significant.
We also appreciate the scale and sensitive nature of what Google is undertaking here – which is, essentially, a form of content censorship across the trillions of searches that Google handles. It is a delicate balance between surfacing search results based on a piece of content’s popularity and ensuring the integrity and accuracy of the information in that content.
These steps should be positive for brands and advertisers concerned about having paid search results appear next to offensive content. Since this is a combined human and technology endeavour, we expect there may be a ramp-up over time as the removal of these offensive results scales.
From an organic search perspective, this should only serve to benefit ethical content marketers who cite proper sources and report accurate information in their content. It should also dilute malicious or inappropriate associations with brands in auto-suggest when they occur.
Finally, this update may impact the ranking of content in the Featured Snippet if it does not have significant authority and history.
Resolution continues to work closely with Google to ensure our clients’ campaigns are run optimally. If you have any questions, please reach out to your Resolution team.