Search engine optimization, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But nearly every website has pages that you don’t want included in this exploration.
In a best-case scenario, these pages are doing nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Thankfully, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your site’s root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
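As a quick illustration, a page-level meta robots tag sits in the HTML head of the page it applies to. Here is a minimal sketch (the directive values are examples from the list above):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Example page</title>
  <!-- Keep this page out of the index, but still follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
<body>
  <p>Page content here.</p>
</body>
</html>
```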
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. As part of the HTTP header response to a URL, it controls indexing for an entire page, as well as the specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complex.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be defined as an X-Robots-Tag.”
While you can set robots.txt-related directives in the headers of an HTTP response with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of at a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response approach makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives.
Perhaps you don’t want a certain page to be cached and want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these instructions.
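In a .htaccess file, that combination might look like the sketch below (this assumes Apache with mod_headers enabled; the filename and date are placeholders):

```apache
<Files "seasonal-offer.html">
  # Don't show a cached copy, and drop the page from results after the given date
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 GMT"
</Files>
```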
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a helpful cheat sheet to explain:
| Crawler Directives | Indexer Directives |
| --- | --- |
| Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl. | Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.<br><br>Nofollow – allows you to specify links that should not pass on authority or PageRank.<br><br>X-Robots-Tag – allows you to control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via a .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. This setup on Apache servers would look something like the below:
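The exact lines depend on your server setup, but a common sketch (assuming Apache with mod_headers enabled) is:

```apache
<FilesMatch "\.pdf$">
  # Keep all PDF files out of the index and stop link-following on them
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```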
In Nginx, it would look like the below:

```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
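Again assuming Apache with mod_headers, a sketch of that configuration might look like this (the extension list is an example and can be adjusted):

```apache
<FilesMatch "\.(png|jpe?g|gif)$">
  # Prevent common image formats from being indexed
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```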
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
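To make the interaction concrete, consider this robots.txt sketch (the path is hypothetical). Because /pdfs/ is disallowed, crawlers never request those URLs, so any X-Robots-Tag: noindex header served on them is never seen, and the files can still appear in the index if they are linked from elsewhere:

```
User-agent: *
Disallow: /pdfs/
```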
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
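You can also inspect response headers without a browser. The following is a minimal Python sketch; the helper just filters header pairs, and the commented-out fetch at the bottom uses a placeholder URL:

```python
from urllib.request import urlopen


def x_robots_values(headers):
    """Collect every X-Robots-Tag value from (name, value) header pairs."""
    return [value for name, value in headers if name.lower() == "x-robots-tag"]


# Live check against a real URL (placeholder shown):
# with urlopen("https://example.com/whitepaper.pdf") as response:
#     print(x_robots_values(response.getheaders()))
```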
Another method that scales well for pinpointing issues on websites with a million pages is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robot-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. As long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

More Resources:

Featured Image: Song_about_summer