Update: Google no longer labels results with “Supplemental Result”. Nevertheless, the penalties still exist.
“Supplemental Result” – the term no business wants to see next to its website on the search results page. Unfortunately, countless sites reside in Google’s purgatory, doomed to receive only minimal traffic. Why does the supplemental index exist, and how can a website get out of it? The following post suggests 4 ways to get out of Google’s supplemental results.
Get Rid of Duplicate Content
90% of the time, a website is in the supplemental index because it contains content (usually text) that also exists on many other websites. A few examples would be manufacturer product descriptions or syndicated content such as news articles. How do you fix this? Simple: rewrite the content and make it original to your website.
How do you know if your content is duplicate? Try this duplicate content checker:
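Absent a dedicated tool, a rough way to gauge overlap yourself is to compare word shingles (overlapping n-word sequences) between two passages. The sketch below is hypothetical – a real checker would fetch live pages and use a tuned threshold – but it illustrates the idea: near-identical text scores close to 1.0, a genuine rewrite scores much lower.

```python
# Rough sketch: estimate textual overlap between two passages using
# Jaccard similarity over word shingles (n-grams). The sample strings
# and the shingle size n=3 are illustrative assumptions.

def shingles(text, n=3):
    """Return the set of n-word shingles in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Jaccard similarity of the shingle sets of two texts (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "This widget is made from durable aluminum and ships worldwide."
copied = "This widget is made from durable aluminum and ships worldwide."
rewritten = "Machined from solid aluminum, our widget is built to last."

print(jaccard_similarity(original, copied))     # identical text scores 1.0
print(jaccard_similarity(original, rewritten))  # a real rewrite scores far lower
```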
Cleanup Nasty, Complicated URLs
Many websites, especially those that rely on database-driven content, use query strings in their URLs – for example, http://www.url.com/ID=XXX&SOURCE=YYYY. Unfortunately, many search engine spiders don’t like crawling these pages. One reason is that engines treat each URL as a unique page, even when the content is the same. So although http://www.url.com/ and http://www.url.com/id=123 may have exactly the same content, you may incur a duplicate content penalty.
What’s the solution? Minimize the use of parameters wherever possible. Using them for external campaign tracking may be fine, but definitely avoid them in your internal linking structure.
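One practical way to keep internal links clean is to strip tracking-only parameters before a URL ever appears in your own navigation. A minimal sketch, assuming the parameter names below (SOURCE, utm_source, etc.) are the ones your tracking scheme uses:

```python
# Illustrative sketch: normalize internal links by dropping query
# parameters that exist only for campaign tracking, so every internal
# link points at one canonical URL. TRACKING_PARAMS is an assumption;
# adjust it to your own site's tracking scheme.

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"source", "utm_source", "utm_medium", "utm_campaign"}

def canonicalize(url):
    """Drop tracking parameters, keeping the rest of the query string."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(canonicalize("http://www.url.com/page?ID=123&SOURCE=newsletter"))
# -> http://www.url.com/page?ID=123
```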
Many websites now clean up their ugly URLs with rewriting software, which has search engine optimization benefits as well. Such software is available for both Apache and Windows servers. For more info, check out this Wikipedia article.
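Conceptually, this software matches an incoming clean path against a pattern and maps it internally to the real script-plus-query-string URL. A rough Python sketch of the idea (the URL patterns here are hypothetical, not from any particular rewrite engine):

```python
# Rough sketch of what mod_rewrite-style software does: match a clean
# incoming path against a pattern and rewrite it to the underlying
# dynamic URL. The rules below are hypothetical examples.

import re

REWRITE_RULES = [
    # /products/123 -> /product.php?id=123
    (re.compile(r"^/products/(\d+)$"), r"/product.php?id=\1"),
    # /news/2007/google-update -> /article.php?year=2007&slug=google-update
    (re.compile(r"^/news/(\d{4})/([\w-]+)$"), r"/article.php?year=\1&slug=\2"),
]

def rewrite(path):
    """Return the internal URL for a clean path, or the path unchanged."""
    for pattern, target in REWRITE_RULES:
        if pattern.match(path):
            return pattern.sub(target, path)
    return path

print(rewrite("/products/123"))  # -> /product.php?id=123
```

The key point is that visitors and spiders only ever see the clean form; the query-string version exists solely inside the server.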
Build Deep Links
While links to the homepage are nice, targeted links to product detail pages deep within your site ensure those pages don’t get orphaned and ignored by the spiders. If a crawler always starts at your homepage, it will likely abandon the session before crawling your whole site. Deep links give spiders new starting points for finding fresh, relevant pages.
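The effect is easy to see on a toy link graph: a breadth-first crawl from the homepage shows how many clicks away each page sits, and which pages no internal link reaches at all. The site structure below is entirely hypothetical.

```python
# Toy sketch: crawl a hypothetical internal link graph from the
# homepage to find each page's link depth and any orphaned pages
# that deep links would rescue.

from collections import deque

LINKS = {
    "/": ["/category"],
    "/category": ["/product-1"],
    "/product-1": ["/product-2"],
    "/product-2": [],
    "/orphan": [],  # no internal link points here
}

def crawl_depths(start="/"):
    """BFS from the start page; return each reachable page's click depth."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in LINKS.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

depths = crawl_depths()
print(depths)                         # /product-2 sits three clicks deep
print(set(LINKS) - set(depths))       # unreachable pages: {'/orphan'}
```

A deep link straight to /product-2 or /orphan gives the spider a fresh entry point instead of forcing it down the whole chain from the homepage.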