Methods to Prevent Google from Indexing a URL

Have you ever needed to prevent Google from indexing a specific URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will likely come when you need to know how to do this.

The three methods most commonly used to prevent Google from indexing a URL are as follows:

Using the rel="nofollow" attribute on all anchor elements that link to the page, to stop the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
Though the differences among the three methods appear subtle at first glance, their effectiveness can vary dramatically depending on which one you choose.

Using rel="nofollow" to prevent Google indexing

Many inexperienced webmasters try to prevent Google from indexing a specific URL by applying the rel="nofollow" attribute to HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.

Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which, in turn, prevents it from discovering, crawling, and indexing the target page. While this approach might work as a short-term fix, it is not a viable long-term solution.
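As a sketch of the technique, an anchor carrying the attribute might look like this; the target URL /private.html is a hypothetical example, not a page from the original article:

```html
<!-- The rel="nofollow" attribute tells Google's crawler not to follow this link.
     /private.html is a hypothetical target URL used for illustration. -->
<a href="https://www.example.com/private.html" rel="nofollow">Private page</a>
```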

The flaw in this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link, so the chances that the URL will eventually get crawled and indexed using this method are quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
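A minimal robots.txt implementing this, assuming the hypothetical URL /private.html from the example above, might look like:

```text
# robots.txt, served from the site root (https://www.example.com/robots.txt)
# /private.html is a hypothetical path used for illustration.
User-agent: *
Disallow: /private.html
```

The directive applies to all crawlers that honor robots.txt ("User-agent: *"); a Googlebot-specific rule could be written with "User-agent: Googlebot" instead.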

Sometimes Google will display a URL in its SERPs even though it has never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and as a result it will display the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also keeping that URL out of the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the page. Of course, for Google to actually see this meta robots tag, it must first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, it will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in its search results.
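Concretely, the tag described above sits in the page's head element. This is a minimal sketch; the title and surrounding markup are placeholders:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tells crawlers that honor the robots meta tag not to index this page.
       Do NOT also block this page in robots.txt, or Google will never see the tag. -->
  <meta name="robots" content="noindex">
  <title>Private page</title>
</head>
<body>
  <!-- page content -->
</body>
</html>
```

The value "noindex" can also be combined with other directives, such as content="noindex, nofollow", to additionally stop the crawler from following links on the page.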