SEO Best Practices for Canonical URLs + the Rel=Canonical Tag

Ideally, there should be only one copy of each of your pages. But for any of a dozen reasons, you may end up with duplicates of the same page and content on the same website, or spread across other sites. This often leads to your search traffic and rankings dropping noticeably, as Google and other search engines struggle to decide which copy to show. The solution adopted to address this duplication issue is the canonical tag, also known as “rel=canonical”. It tells search engines which of the duplicates is the main page to index. Without it, you are leaving that decision in the hands of Google, which can significantly damage your page ranking. Google itself supports this method of tackling duplicate content and will not let it affect your search rankings. However, canonical tags are not always perfect, at least when they are not implemented properly. This article covers SEO best practices for canonical tags so that you can get the best out of them.

Rel=canonical

This is the most popular way to canonicalize multiple landing pages. There are other methods, but most experts will recommend this one. As a best practice, though, you are advised not to rely on this method alone. Granted, it will succeed in telling any visiting search bot which version of your page to index, and it concentrates the authority of all links on that particular URL, which is the idea in the first place. But it only works as long as search engines maintain their current position on it. If they ever change their minds, it would prove damaging to have built your entire SEO strategy on this type of canonicalization.
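
As a concrete illustration, here is a minimal sketch of how a duplicate page can declare its canonical version, either with a <link rel="canonical"> element in its head or with an HTTP Link header. The Node/Express setup, routes and URLs are assumptions for the example, not a prescribed implementation.

```ts
// Minimal sketch, assuming an Express app (npm install express).
// Routes, file paths and URLs are hypothetical examples.
import express from "express";

const app = express();
const CANONICAL = "https://www.example.com/red-shoes";

// A duplicate landing page points search engines at the canonical URL
// from inside its <head>.
app.get("/red-shoes", (req, res) => {
  res.send(`<!doctype html>
<html>
  <head>
    <title>Red Shoes</title>
    <link rel="canonical" href="${CANONICAL}" />
  </head>
  <body>Product listing goes here.</body>
</html>`);
});

// The same hint can be sent as an HTTP header, which is useful for
// non-HTML resources such as PDFs.
app.get("/red-shoes.pdf", (req, res) => {
  res.setHeader("Link", `<${CANONICAL}>; rel="canonical"`);
  res.sendFile("/var/www/files/red-shoes.pdf");
});

app.listen(3000);
```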

301 Redirect

301 redirects are a common canonicalization practice, and you should be familiar with them by now. What they do is send a visitor straight from the duplicate URL to the canonical one, so the duplicate copy is never seen. They differ from canonical tags in that the latter do not redirect users at all. A 301 redirect is used when you do not want users to see a certain page but do not want to remove it outright and lose whatever UX benefits it holds. If there are no technical reasons against it, as a best practice you should always redirect, provided it would not hurt your site. If it would, set a canonical URL instead.
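
For example, a minimal sketch of a permanent redirect from a duplicate URL to its canonical counterpart, again assuming Express and hypothetical paths:

```ts
import express from "express";

const app = express();

// The duplicate path is never shown to the visitor; they are sent
// straight to the canonical URL with a permanent (301) redirect, so
// search engines consolidate signals onto the destination.
app.get("/summer-sale-2023", (req, res) => {
  res.redirect(301, "https://www.example.com/summer-sale");
});

app.listen(3000);
```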

Use passive parameters in Google Search Console

To save resources and time, make use of passive parameters in Google Search Console. Instead of dealing with individual parameters across all your pages, which may run into the hundreds, you should log into Google Search Console and set those parameters so the URLs are not crawled at all. What this achieves is telling Google which parameters you want it to treat as passive. That way, once Googlebot sees a URL carrying one of those parameters, it regards the variation as though it did not exist.
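
The parameter settings themselves live in Search Console's interface rather than in code, but the idea behind them can be sketched as follows: certain parameters change nothing about the content, so stripping them always yields the one URL you actually want indexed. The parameter names below are illustrative assumptions, not an official list.

```ts
// Sketch only: a helper that strips parameters treated as "passive"
// (they do not change the page content) to recover the canonical URL.
// The parameter names are illustrative assumptions.
const PASSIVE_PARAMS = ["utm_source", "utm_medium", "utm_campaign", "sessionid", "ref"];

function canonicalUrl(rawUrl: string): string {
  const url = new URL(rawUrl);
  for (const param of PASSIVE_PARAMS) {
    url.searchParams.delete(param);
  }
  return url.toString();
}

// Both variants collapse to https://www.example.com/red-shoes
console.log(canonicalUrl("https://www.example.com/red-shoes?utm_source=newsletter"));
console.log(canonicalUrl("https://www.example.com/red-shoes?sessionid=abc123"));
```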

Avoid cloaking

Google may penalize you if it suspects your site is cloaking. Cloaking is when a variation of your page is totally different from the original in design, content and scope; in essence, it is presenting pages to search engines that are different from the ones presented to users. This is usually done to influence rankings, and Google is quite strict about it. However, Google does not view testing carried out with tools such as Content Experiments as cloaking. So avoid using an A/B testing platform to change the spirit of your website, as that is likely how it would be treated. The guideline is that your variations should not change a user's perception of your original content; both versions should stay true to the same spirit.

Include location hashes

This is also known as a fragment URL, and it does not involve any real difference in content; the two variations are essentially the same. What happens is that the fragment points to a subsection of the content, so one URL lands the reader at the top of the page while the other drops them further down, at the section in question. Both versions contain the same content and respond to the same search intent, so Google will not see them as different URLs, will not index them separately and will not rank them differently. The practice simply jumps a user to the particular section of the content they are interested in instead of making them read through the entire thing.
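
A short sketch of why fragments do not create duplicate pages: the part after "#" is resolved in the browser and is never sent to the server, so both variants point at the same document. The URLs below are hypothetical examples.

```ts
// The fragment (everything after "#") identifies a position within the
// page; it is not sent to the server and does not create a second URL
// for search engines to index.
const a = new URL("https://www.example.com/guide#pricing");
const b = new URL("https://www.example.com/guide");

console.log(a.hash);                // "#pricing" (handled client-side)
console.log(a.origin + a.pathname); // "https://www.example.com/guide"
console.log(a.origin + a.pathname === b.origin + b.pathname); // true: same document
```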

Never block Google from crawling one URL and not the other

Using robots.txt will keep Google's crawlers from reaching the duplicate content. But it also means Google can never see that both variations carry the same content. By blocking Google from crawling one URL and not the other, you only end up making that URL invisible to Google, along with all of its SEO advantages. In practical terms, Google will have no idea of that URL's ranking signals, its engagement and content signals, or any links that point to it. The implication is that you lose any ranking advantage those signals might have passed on, defeating one of the main ideas behind using canonical tags.
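
As a rough illustration, here is a sketch of a sanity check that flags this anti-pattern: a duplicate URL that declares a canonical target but is also disallowed in robots.txt, meaning Google can never see that hint. The robots.txt rules, URL map and naive parsing are all made-up assumptions for the example.

```ts
// Sketch: flag duplicate URLs that declare a canonical target but are
// blocked in robots.txt, so the canonical hint can never be crawled.
// The robots.txt parsing here is deliberately naive (Disallow prefixes only).
const robotsTxt = `
User-agent: *
Disallow: /print/
`;

const disallowedPrefixes = robotsTxt
  .split("\n")
  .filter((line) => line.trim().toLowerCase().startsWith("disallow:"))
  .map((line) => line.split(":")[1].trim())
  .filter((prefix) => prefix.length > 0);

// Hypothetical map of duplicate URL -> canonical URL declared on that page.
const canonicalMap: Record<string, string> = {
  "/print/red-shoes": "/red-shoes",
};

for (const [duplicate, canonical] of Object.entries(canonicalMap)) {
  const blocked = disallowedPrefixes.some((prefix) => duplicate.startsWith(prefix));
  if (blocked) {
    console.warn(
      `${duplicate} points to ${canonical} but is blocked by robots.txt; ` +
        `its signals cannot be consolidated.`
    );
  }
}
```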

Prefer 301 redirect to 404

Avoid using a 404 for non-canonical versions (your duplicate content). Serving a 404 discards any signals that URL has earned and can reduce your rankings for other URLs. Instead, use a 301 redirect. The only time a 404 is recommended is for a brand-new page, an error page, or a URL that carries no ranking signals at all. You need to be extremely sure about that last point before you decide to use a 404.
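
A small sketch of the priority this section suggests, again assuming Express and hypothetical paths: retired duplicates are 301-redirected to their canonical targets, and a 404 is only reached when there is genuinely nothing to point to.

```ts
import express from "express";

const app = express();

// Hypothetical map of retired duplicate URLs to their canonical targets.
const redirects: Record<string, string> = {
  "/old-red-shoes": "/red-shoes",
  "/red-shoes-copy": "/red-shoes",
};

app.use((req, res, next) => {
  const target = redirects[req.path];
  if (target) {
    // Prefer a 301 so any signals the old URL earned pass to the target.
    res.redirect(301, target);
  } else {
    // Only fall through to a 404 when there is truly nothing to redirect to.
    next();
  }
});

app.listen(3000);
```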

Conclusion

Canonicalization can be very helpful for your SEO, especially for big websites. But like everything with a significant upside, it can also be risky if not properly implemented. Even as a small site, though, you need rel=canonical; it is a powerful tool that promises real improvements.
