SEO Friendly Content Management Systems

Over the years I’ve received a few questions on this blog or at conferences about search engine friendly Content Management Systems (CMS). Is there such a thing? What are the best ones?

Before the days of WordPress, there were really only a couple of CMS options that provided the flexibility needed for search engine optimization. But now SEO friendly CMS options are popping up everywhere.

But before you decide on a CMS for your site, there are still some crucial aspects you need to consider.

Stephan Spencer has written a helpful article to this effect called How to Choose Content Management Systems for SEO. He breaks down CMS features into the following categories:

  • critical
  • important
  • desirable
  • optional

Then he explains why the various CMS features meet those categories and how they impact SEO.

Great stuff!


Q and A: Can changing my CMS affect rankings?

Question

Hi Kalena

We recently moved from a custom CMS system to Expression Engine. Overall I like it, though we have seen a bit of a drop in Google referrals despite keeping the design and layout of our pages (largely) the same. One thing I noticed is that previously all our individual article pages ended in ‘.html’ or ‘.shtml’ whereas with the new CMS they all just end in a slash. So my question: does Google give priority to content that ends with a ‘known’ HTML ending like .html or .shtml, or doesn’t it care?
- Dave

Hi Dave,

The search engines won’t give a ranking preference on the basis of the filename or URL. Whether it’s a .html, a .php, an .asp or even a .pdf doesn’t matter – as long as they are able to crawl and index your pages, the extension (or lack thereof) is irrelevant.

However, it sounds to me like although you say the design and layout of your pages is much the same, the names of those pages (i.e. the URLs used to access them) have changed. This is quite a common issue when switching CMSs, and unless you are careful, you can lose much of the credibility (and hard-earned rankings) achieved by your old site.

If any of the pages in your new site have a different URL, they will NOT show up in search results until they have been re-crawled and indexed by the search engines.

To check which pages on your site have been indexed, do a Google search for site:yourdomain.com (substituting your own domain name of course). This will provide a list of all the pages currently indexed by Google, and it may include a mixture of old and new pages. Try clicking on the old page links – if they still come up with the old pages, or you get 404 (file not found) errors, read on and I’ll explain how to fix this. If the old pages do still load from your server, you may want to delete them too.

Page Redirects

It is critical that, as part of any site redesign process, you put page redirects in place – this ensures that anyone trying to access one of your old pages is sent to the new page. This is clearly important from the user perspective – it ensures they get the current information (and not some old, out-of-date page). But it is also important from an SEO perspective – any links to the old page (from external sites) need to go to the new page, and the search engines (which have presumably indexed your old pages) also need to be told that a new page exists.

Notifying search engines and fixing backlinks for all your pages may sound like a very daunting task; fortunately, there is a (reasonably) simple solution – our friend the 301 redirect.

301 redirects are server-based redirects and are reasonably easy to set up (although they can be a little technical, so you may need help from your developer). The actual technique will vary depending on your server environment, but effectively a 301 redirect simply sends visitors trying to access your old pages to the correct new page. Also known as a permanent redirect, a 301 also tells search engines that this is a permanent change, and to update their index (and ranking data) accordingly.
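To give you a feel for what this looks like in practice, here is a minimal sketch of what the rules might look like in an Apache .htaccess file. The domain, folder and file names below are just examples – your developer will need to adapt them to your actual URLs, and the syntax is different again on servers like IIS or nginx:

  # Redirect a single old article page to its new address
  Redirect 301 /articles/my-old-article.html http://www.example.com/articles/my-old-article/

  # Or catch every old .html / .shtml article in one pattern with mod_rewrite
  RewriteEngine On
  RewriteRule ^articles/(.+)\.s?html$ http://www.example.com/articles/$1/ [R=301,L]

Whichever method you use, each old URL should be redirected to its closest equivalent page on the new site – sending everything to the home page wastes most of the value of those old links.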

You can find more posts about 301 redirects on this site – or for some more technical info, I suggest you take a look at this good overview of 301 redirect techniques by Steven Hargrove.

Andy Henderson
Ireckon Web Marketing


Q and A: Why can’t I see my Alt Img tags?

Question

Hi Kalena

I have been practising on my own site.  When I add an alt img tag I still cannot see the text when I scroll over the image.  I don’t understand this – could you please help? My URL is [URL removed for privacy reasons]. There is no alt img tag at present (I took it out because it didn’t seem to work).

Thanks in advance and regards,

Barry

—————————————————————————————————

Hi Barry

If you’re using Firefox, you won’t see alt tags when you mouseover. But if you right click on the image with your mouse and view *properties*, you should see your alt text in the alt field.

Or you could just view your site in Internet Explorer where the mouseovers should work fine.

Regardless of which browser you use, search engines will be able to index your alt tags. Plus, text-to-speech software will be able to read them for visually-impaired visitors, so you should include them wherever possible for site usability purposes.
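For reference, here’s a minimal example of how the alt text sits inside the image tag – the file name and wording are just placeholders. The mouseover tooltip in Firefox actually comes from the title attribute, so you can add that too if you want hover text in every browser:

  <!-- alt describes the image for search engines and screen readers -->
  <!-- title (optional) is what Firefox displays as the mouseover tooltip -->
  <img src="blue-widget.jpg" alt="Blue widget with chrome handle" title="Blue widget with chrome handle" />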


Q and A: Will two sets of header information affect our ranking?

Question

Dear Kalena…

Our Web site uses a layered navigation scheme which pulls content (formatted as its own page) into a template which wraps the top, left and bottom navigation (also its own page) around the content page. This results in two sets of header tags when the page is loaded in a browser.

Will two sets of header information affect our ranking?

We have a script that pulls the title tag from the content page and displays it at the top of the two combined pages. I’m hoping to hide the second title by hiding it in design notes. If I have design notes in my HTML code, will search engines ignore it?

Thanks

Brad

Dear Brad

Years ago, having two sets of header tags in a document would cause considerable display issues for some browsers, but as browsers have evolved (to accommodate poor coding and situations like this), you most likely won’t have too many browser-related problems.

However, from an SEO point of view it would be best if you could avoid unnecessary header tags. The search bots read pages from top to bottom, so by default they will use the header data from the first tag and technically should ignore the information contained in the second one. But having two such tags bloats the code (even if one is commented out) and creates unnecessary information that the search bots have to scan, even though it provides absolutely no value to the page.

If the pages being pulled into the template aren’t designed to be viewed or indexed without the layered navigation system you’re using, then you really shouldn’t need full header sections on those content documents at all. Or, as another alternative, have an additional script that runs and only imports/displays the data below the head section.
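To illustrate the idea, here’s a rough sketch (hypothetical markup, not your actual template) of how the pieces could fit together so that the combined page ends up with a single head section and a single title:

  <!-- template page: supplies the ONLY head section plus the wrapping navigation -->
  <html>
  <head>
    <title>Title your script pulls in from the content page</title>
  </head>
  <body>
    <div id="navigation">... top, left and bottom navigation ...</div>
    <div id="content">
      <!-- content page is inserted here: body-level markup only,
           with no html, head or title tags of its own -->
      <h1>Article heading</h1>
      <p>Article text ...</p>
    </div>
  </body>
  </html>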

Hope this helps

Peter Newsome
SiteMost SEO Brisbane


Webstock 09 : Bruce Sterling – The Web is all Turtles and Duct Tape

Live blogging The Short and Glorious Life of Web 2.0 presentation at Webstock 09 by Zeitgeist Author and Wired Blogger, Bruce Sterling.

Bruce starts by saying, here in New Zealand, we have lost sight of Web 2.0. Mistakes have been made. You think it’s the world of tag clouds, drop shadows and fonts.

Web 1.0 was the Britannica online while Web 2.0 was Wikipedia. Web 1.0 was portals while Web 2.0 was search engines. The canonical definition of Web 2.0 was coined by Tim O’Reilly: “.. the network as platform spanning all connecting devices, apps that make the most of blah blah blah…..”

The definition is thesis-long and reads like a Chinese takeout menu, says Bruce. He then showed a slide with a visual flow chart of the definition.

Web 2.0 looks like a social network. Add some scenery and pictures to this Web 2.0 diagram and it becomes a Webstock Conference (at this point there is some sniggering in the audience).

You can’t break it down and analyze it. What’s exciting about this 5 year old flow chart is the pieces that are utter violations of previous common sense e.g. the web as platform. Native web logic is a new turtle, sitting on another, older turtle, sitting on another older still turtle. Just like platforms sitting on clouds. (This imagery has me grinning because I actually have a ceramic representation of the turtles on turtles analogy on my bookshelf).

AJAX is an acronym. How the hell can you make an acronym of an acronym? (more sniggering). Everybody knows that Web 2.0, with its JavaScript binding everything, is made out of AJAX. After all, Sun built JavaScript. JavaScript is the duct tape of Web 2.0 – it’s the ultimate material that will bind anything. It’s the glue of mashups.

Bob Metcalfe, the inventor of Ethernet, had to eat his words after claiming that the Internet would fall over. We’ve used JavaScript to duct tape the turtles all the way down. What’s with this blog business? Most of the things we call blogs today have zero to do with weblogs. True weblogs are basically records of web surfing. Bruce’s own *blog* is consumed with link rot. He blogged stuff that is now in mystical 404 Land. (At this point the sniggering in the audience has turned to a little bristling and some vexed looks. Tweets fly about the room with the same theme – is Bruce Sterling giving us geeks a public spanking for worshipping Web 2.0?)

The phrase Web Platform is weird. Up there with *wireless cable* and *business revolution*. What about *dynamic content* – content is static for Pete’s sake. It is not contained.  And don’t forget *collective intelligence*. Google apparently has it and therefore it matters. Businessmen and revolutionaries alike use Google.  Bruce sees Larry and Sergey as the coolest Stanford grads ever, with their duct-tape ridden offices (more laughter).

Geek thought crime is the assumption about what constitutes *collective intelligence*. This attitude makes you look delusionary. He’d like to see a better definition such as: *semi autonomous data propagation*. I paid attention to Web 2.0 because I thought it was important. I supported Tim’s solar system invention and thought Web 2.0 people were a nifty crowd. The mainframe crowd were smarter than Web 2.0 people – the super selective technical elite. Problem was that all sense of fun had been boiled out of them.

The telephone system was the biggest machine in human history, but the users couldn’t access the cables or the pipeline. Unlike now, where everyone gets their hands on the components. But I’m not nostalgic for the old days – after all, nostalgia is not what it used to be. Look at Microsoft: the place where innovations go to die (loud guffaws, including one from me, and we all rush to tweet that little gem).

Next for the web is a spiderweb in a storm. Some turtles get knocked out. The Fail Whale fails. Inherent contradictions of the web get revealed. Prediction: the web stops being the fluffy meringue dressing of business. What kind of a world do we live in when pirates in Somalia can make cell phone sonar calls via super tankers? We’ve got a web balanced on top of a collapsed economy.

Next is a transition web. Half the world’s population is on the web and the rest are joining. We need to know how to make the transition. During Web 2.0, we sold ourselves to Yahoo. In the transition web, we have no safety net. We’re all in the same boat. I’m bored of the deceit, disgusted with cynical spin etc. etc. Let’s get on with real lives. (Bruce’s rant continues, but at this point I am seriously rapt and stop typing to pay more attention.)

To experience the full spanking by Bruce, see his own transcript of the presentation.
