How to Use Crawl Budget Efficiently.

09 February 2017 · SEO

Less is More: Take Control of Your Crawled Pages

More input doesn’t always result in more output. This statement doesn’t apply in all disciplines of Digital Marketing, but it certainly does in SEO.

The way you enable search engines to access your website shapes their overall perception of its quality. Inefficient use of crawl budget holds your overall SEO performance below its potential.

In your IA (Information Architecture), find the right balance between page granularity and high crawl efficiency. Make it a priority to redirect error pages that are being returned to Google, and avoid broken links and redirect chains. Refrain from using parameter URLs, as they tend to multiply the number of crawlable URLs, depending on the technical setup and area of implementation. Set up redirect rules for the most common types of duplicated content, such as uppercase URLs, http vs. https, and URLs with vs. without www, to name a few.
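
As an illustration, here is a minimal Python sketch of such redirect rules, assuming a setup where requested URLs can be normalised before a 301 redirect is issued. The function names and the preference for https and the www variant are assumptions for the example, not a prescribed canonical scheme:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Collapse common duplicate-content variants into one canonical form:
    uppercase URLs, http vs. https, and URLs with vs. without www."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    scheme = "https"                   # prefer https over http (assumption)
    netloc = netloc.lower()
    if not netloc.startswith("www."):  # prefer the www variant (assumption)
        netloc = "www." + netloc
    path = path.lower()                # collapse uppercase URLs
    return urlunsplit((scheme, netloc, path, query, fragment))

def redirect_target(requested: str) -> str | None:
    """Return the 301 target if the requested URL is not canonical."""
    canonical = canonical_url(requested)
    return canonical if canonical != requested else None

# All of these duplicates collapse into a single canonical URL:
print(redirect_target("http://Example.com/Shoes/"))       # https://www.example.com/shoes/
print(redirect_target("https://example.com/shoes/"))      # https://www.example.com/shoes/
print(redirect_target("https://www.example.com/shoes/"))  # None (already canonical)
```

In practice these rules usually live in the web server or CMS configuration rather than in application code, but the normalisation logic is the same.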

What is Crawl Budget?

Crawl budget is the amount of resources search engines spend to crawl a website. The budget allocated by search engines to a particular site is largely influenced by the number and quality of external links (backlinks) from other websites.

Common Crawling Issues

A common misconception is that it is best practice to expose as many pages as possible to search engines. As a result, one KPI that often gets reported is the number of pages indexed and displayed in Google’s search results for a site query on a particular domain (i.e. site:2019.resolutiondigital.com.au). Even if this number reflected the actual indexation status (which it rarely does for bigger websites), it would still be a raw count without any context. It reflects quantity, not quality, and without that context the number is unusable.

An issue often experienced in online shops is a vast number of indexed URLs generated by product filters on category pages.

Filters are great for users, as they allow a narrower search using attributes like colour, size, price, and brand, or product specifics like areas of application.

Search engines’ ability to crawl the website suffers when all of these filters can be applied in every possible combination, creating a near-infinite number of URLs that are potentially going to be crawled. Because of the sheer number of possible combinations, each individual combination is so rare that interest in it is very low, and it does not provide enough value to be indexed by Google. Similar observations can be made with internal search result pages, or tag pages on content-heavy websites such as blogs.
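
To see how quickly this adds up, consider a hypothetical category page with four filter attributes. The counts below are made up for illustration:

```python
# Hypothetical filter attributes on a single category page,
# mapped to the number of selectable values each offers.
filters = {"colour": 12, "size": 8, "price": 6, "brand": 25}

# Every attribute can be left unset or set to one of its values,
# so each attribute contributes (values + 1) variants to the URL space.
combinations = 1
for values in filters.values():
    combinations *= values + 1

print(combinations)  # 13 * 9 * 7 * 26 = 21,294 crawlable URL variants
```

Multiplied across hundreds of categories, the crawlable URL space quickly dwarfs the set of pages actually worth indexing.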

Crawl budget is also consumed by issues like duplicated content (the same page under different URLs), 404 error pages (URLs that Google attempted to crawl but that returned a ‘Not Found’ error), and outdated content that no longer provides value.
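
A quick way to surface such wasters is to spot-check a sample of URLs, for example from a sitemap or server logs. The sketch below assumes the third-party requests library and uses a placeholder URL list:

```python
import requests

def audit(urls):
    """Flag common crawl-budget wasters: 404s and redirect chains."""
    for url in urls:
        resp = requests.get(url, allow_redirects=True, timeout=10)
        if resp.status_code == 404:
            print(f"404       {url}")
        elif len(resp.history) > 1:  # more than one hop = redirect chain
            hops = " -> ".join(r.url for r in resp.history)
            print(f"CHAIN     {hops} -> {resp.url}")
        elif resp.history:           # a single redirect, e.g. http -> https
            print(f"REDIRECT  {url} -> {resp.url}")

# Placeholder sample; in practice, feed in your sitemap or log-file URLs.
audit(["http://example.com/old-page", "https://www.example.com/"])
```

Every URL flagged here represents budget spent on a page that will never rank.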

Google spreads 100% of the crawl budget across the total number of pages / URLs on a website. If 50% of those URLs are pages of very little SEO value, Google can only spend the remaining 50% of the overall crawl budget on pages that provide enough value to be listed in organic search.

How to Fix Crawling Issues

Many of these issues can be attributed to (mis)configurations in the server or CMS / shop system setup and can only be fixed by adjusting these. This applies to filters, internal search results, most cases of duplicated content, and 404 error pages. However, how these issues are solved depends largely on the underlying SEO strategy and on the technical capabilities of the servers and systems used.
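
As one illustration of such an adjustment, low-value URL patterns can be kept out of the crawl path with robots.txt. The sketch below simply writes such a file from Python; the /search path and the filter parameter name are placeholders that would need to match the site’s real URL structure:

```python
# A minimal robots.txt that keeps internal search results and
# parameterised filter URLs out of the crawl path. The paths and
# parameter names are placeholders; match them to your own setup.
ROBOTS_TXT = """\
User-agent: *
Disallow: /search
Disallow: /*?*filter=
"""

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(ROBOTS_TXT)
```

Disallowing crawling frees budget directly; a noindex tag, by contrast, still requires the page to be crawled before it can take effect.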

IA-related issues usually require an audit of content structure and usage. Depending on how large the gap is, a restructure of the IA should be considered.

Inefficient use of crawl budget usually points to an underlying structural problem that, once the root cause is fixed, causes fewer problems going forward. Given its effect on rankings and traffic, the crawlability of a website should be analysed as a matter of priority, and time and resources should be made available to fix the issues identified.

SEO is an ongoing and continuous process. Fixing structural issues at the start will pave the way for better and faster SEO results in the long run.

Written by:
Veronika Schyra

Date Published:
February 2017
