Q and A: Why should you prevent robots from indexing PPC landing pages?

Hi Kalena

Quick question.

One of the lessons in your SEO201 course says that if you run PPC campaigns and use landing pages that are similar in layout and content, you should prevent search engine robots from indexing them. Can you explain why?

The way I see it, the more pages the search engines index, the more exposure you get.

Thanks
Alex

—————————————————————————————————

Hi Alex

PPC landing pages often look nearly identical, with the only difference between them being the target keywords used. Web pages that look too similar are sometimes filtered out of the search results as duplicate content. Too much duplicate content on a domain may impact its ability to rank highly, so I always recommend preventing robots from indexing landing pages, using your robots.txt file.
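If your landing pages are grouped together, a couple of lines in robots.txt will do the job. Here's a rough sketch, assuming (purely for illustration) that the landing pages sit in a /landing/ folder:

  User-agent: *
  Disallow: /landing/

If they aren't all in one folder, you can list each file on its own Disallow line instead.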

If you are using existing pages on your site as PPC landing pages and they aren’t too similar to each other, there is no need to block robots from indexing them. Make sense?


Q and A: Does Google automatically search sub-directories?

Dear Kalena…

Does Google automatically search sub-directories? Or do I have to have a ‘Links’ page to force Google to index the sub-directories?

Also, I was reading about ‘redundant’ content. I have a business directory which will eventually have thousands of pages, with the only real difference in content being: {Company} {City} {ST} and {Subject1}. Will Google view this as redundant content?

Best Regards,

Steve

Dear Steve,

For Google to index your sub-directories, you will need some links pointing to them. These links can simply be internal navigation links. If you have a large website, it’s also advisable to include a sitemap that links to all the pages and sub-directories within your site.
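If you go the sitemap route, a simple HTML sitemap page can just be a list of links, one per page, grouped by sub-directory. A minimal sketch (the URLs below are invented for illustration):

  <ul>
    <li><a href="/directory/widgets-co-brisbane.html">Widgets Co, Brisbane</a></li>
    <li><a href="/directory/gadgets-ltd-sydney.html">Gadgets Ltd, Sydney</a></li>
    <li><a href="/about/contact.html">Contact</a></li>
  </ul>

You can also submit an XML sitemap through Google Webmaster Tools so Google has a full list of URLs to crawl.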

Regarding your redundant content query – it’s best SEO practice to have at least 250 words of unique content per page. So if all the pages are the same apart from the contact details, then yes, it would be considered redundant content.

My advice would be to offer a one-page listing for each company and, on that page, include a short blurb about the company, their contact details and a feature that allows users to add feedback/comments/reviews. This should give Google enough unique information to index without causing redundant or duplicate content issues.

Hope this helps!

Peter Newsome
SiteMost


Q and A: Will building a version of my site in another language create duplicate content issues?

Hi Kalena

I was wondering if you would be able to give me some insight on a question that I have. I am working on launching a Spanish version of my company’s website. It’s a mirror of our current site: when users select Spanish, they can view the pages in Spanish.

Will this pose a problem for SEO if the pages keep the same names? Our hosting company has created a new folder where the Spanish files sit, and the structure mirrors the English version of the site. If the Spanish version of the website is set up this way, will the search engines consider these duplicate pages?

Thank you,
Heather

Hi Heather

If the mirror pages are in Spanish, then they are not duplicates and won’t be treated as such. Smile! You have nothing to worry about.


Q and A: Do regional domains constitute a duplicate content problem?

Dear Kalena…

First of all, I find the info on your site extremely useful – I always look forward to the newsletter! I have been trying to find the time to do the SEO course, but finding the time is always a problem! However, it’s still on my to-do list.

I am trying to sort out a problem regarding duplicate content on my sites. We run local sites for each language/country we trade in (e.g. .fr for France and .co.uk for England). Unfortunately, while growing the business I never had time to research SEO practices, so I ended up with a lot of sites with the same duplicate content in them, including title tags, descriptions etc. I had no idea how bad this was for organic ranking!

I have now created unique title tags and descriptions for ALL the pages on ALL the sites. I have also rewritten the content of the home page and the paternity testing page (our main pages) so that it is unique for each site in English. The only site with completely unique content across its pages is the .com, plus parts of the .co.uk. On the remaining pages that still carry duplicate content, I have put a NOINDEX, FOLLOW tag so that the spiders will not index them. I used FOLLOW as opposed to NOFOLLOW because I still want the internal links on those pages to be picked up – does this make sense?

Also, having made these changes, how long does it normally take for Google to refresh its filters and start ranking the site? The changes are now about a month old, however the site is still not ranking.

Also, should this not work – do you have any experience with submitting a reconsideration request through Webmaster Tools? What are the upsides and downsides of this?

Any advice would be greatly appreciated.

Regards
Kevin

Dear Kevin

Thanks for your coffee donation and I’m glad you like the newsletter. Now, about your tricky problem:

1) First up, take a chill pill. There’s no need to lodge a reinclusion request with Google. According to Google’s Site Status Tool, your main site is being indexed and hasn’t been removed from their datacenter results. A standard indexed page lookup shows 32 pages from your .com site have been indexed by Google, while a backward link lookup reveals at least 77 other sites are linking to yours. If you’ve put NoIndex tags on any dupe pages, you’ve covered yourself.
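For anyone following along at home, the NOINDEX, FOLLOW tag Kevin is describing goes in the <head> section of each duplicate page and looks like this:

  <meta name="robots" content="noindex, follow">

It tells compliant robots not to add the page to their index, while still allowing them to follow the links on it.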

2) Next, pour yourself a drink and put your feet up. Your .fr site is also being indexed by Google, but there isn’t a dupe content issue because the site is in French, meaning that Googlebot sees the content as being completely different. Your .co.uk site is also being indexed by Google and again, there isn’t a dupe content issue because it looks like you have changed the content enough to ensure it doesn’t trip any duplicate content filters.

3) Now you’re relaxed, log in to Google Webmaster Tools and make sure each of your domains is set to its appropriate regional search market. To do this, click on each domain in turn and choose “Set Geographic Target” from the Tools menu. Your regional domains should already be associated with their geographic locations, i.e. .co.uk should already be associated with the UK, meaning that Google will automatically give preference to your site in the SERPs shown to searchers in the UK. For your .com site, you can choose whether to associate it with the United States only (recommended, as it is your main market) or not to use a regional association at all.

4) Now it’s time to do a little SEO clean-up job on your HTML code. Fire or unfriend whoever told you to include all these unnecessary META tags in your code:

  • Abstract
  • Rating
  • Author
  • Country
  • Distribution
  • Revisit-after

All these tags are unsupported by the major search engines and I really don’t know why programmers still insist on using them! All they do is clog up your code and contribute to code bloat.
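If you’re not sure what you’re looking for, these tags sit in the <head> section and typically look something like the lines below (the values are invented for illustration) – they can simply be deleted:

  <meta name="abstract" content="Paternity testing services">
  <meta name="rating" content="general">
  <meta name="author" content="Example Ltd">
  <meta name="country" content="UK">
  <meta name="distribution" content="global">
  <meta name="revisit-after" content="7 days">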

5) Finally, you need to start building up your site’s link popularity and boost your Google PageRank beyond the current 2 out of 10. And by link building, I mean the good old-fashioned type – seeking out quality sites in your industry and submitting your link requests manually, NOT participating in free-for-all link schemes or buying text links on low-quality link farms.

Good luck!


Q and A: Duplicate content with dynamic sites

Dear Kalena…

I’m working on a CFM database-driven site and Google thinks we have hundreds of duplicate title tags and descriptions, because pages on the site can be accessed using the plain page ID and/or the page ID plus navigation query strings.

Example (these 3 URLs all go to the same page, and Google is logging them as 3 different pages in Google Webmaster Tools):

1) body.cfm?id=19

2) body.cfm?id=19&oTopID=19

3) body.cfm?id=19&oTopId=62

To avoid a duplicate content penalty, I cleaned up my sitemap.xml to only include the plain page ID with no query strings (example: body.cfm?id=19). In my robots.txt file I’ve also added a disallow rule to block any URL with ‘TopId’ in it. I’m hoping this will help… have you experienced this type of problem before?

Thanks! Mitch

Dear Mitch,

Your question was the source of some debate over here, so thanks for bringing it up! There is a question as to whether Google will actually index pages with session IDs, and the general thinking is no, so you may be in the clear.

In any case, you seem to be handling the problem of duplicate content on database-driven sites well. It’s best to pick just one version of the URL to include in your sitemap. Also be sure NOT to link to the duplicate versions of these pages from within your site. If you do need to link to them, use a “nofollow” attribute on the link.
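As a quick sketch of that last point, a nofollow’d internal link to one of the URLs from Mitch’s example would look something like this (the anchor text is invented for illustration):

  <a href="body.cfm?id=19&oTopID=19" rel="nofollow">Product details</a>

One small caveat on the robots.txt approach Mitch describes: robots.txt path matching is case-sensitive, so a rule written for ‘TopId’ won’t necessarily match ‘oTopID’ – it’s worth checking the exact parameter spelling against the live URLs.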

Best of luck,

Nick Loeser

TheSmallMerchant.com
