Forum OpenACS Q&A: Re: Approach to Google-optimizing

Posted by Dirk Gomez on
A good search engine should try to behave like an experienced web surfer. How do YOU rate a page?

You read the first few paragraphs and then decide whether it makes sense to continue through the rest, so you rate the first bytes higher.
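To make the "first bytes" idea concrete, here is a minimal sketch of position-weighted scoring. Everything in it - the exponential decay, the constant - is an assumption for illustration, not anything Google has published:

```python
import math

# Hypothetical sketch: weight term occurrences by how early they appear
# in the page, so the "first bytes" count for more. The decay constant
# and the formula are invented for illustration.
def position_weighted_score(text: str, term: str, decay: float = 0.001) -> float:
    """Count occurrences of term, weighting early occurrences higher."""
    text_l, term_l = text.lower(), term.lower()
    score, start = 0.0, 0
    while (pos := text_l.find(term_l, start)) != -1:
        # An occurrence at byte 0 contributes 1.0; later ones decay toward 0.
        score += math.exp(-decay * pos)
        start = pos + len(term_l)
    return score

# A hit in the opening paragraph outweighs the same hit much further down.
print(position_weighted_score("Greenpeace ..." + " " * 5000 + "Greenpeace",
                              "greenpeace"))
```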

You look at the URL and decide whether it is dodgy or trustworthy.

You look at the big, bold letters. Hence a search engine should weight h2 and h3 headings higher.
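Along the same lines, one could imagine markup-sensitive scoring, where hits inside h2, h3, or bold tags count extra. The tag weights below are pure guesses, and the parsing is deliberately crude:

```python
import re

# Assumed boost factors - nothing published works like this verbatim.
TAG_BOOST = {"h1": 4.0, "h2": 3.0, "h3": 2.0, "b": 1.5}

def heading_boosted_score(html: str, term: str) -> float:
    """Count term hits, giving extra weight to hits inside emphasized tags."""
    term_re = re.compile(re.escape(term), re.IGNORECASE)
    # Every hit anywhere in the page counts at weight 1.0.
    score = float(len(term_re.findall(html)))
    # Hits inside boosted tags get their extra weight on top.
    for tag, boost in TAG_BOOST.items():
        for block in re.findall(rf"<{tag}(?:\s[^>]*)?>(.*?)</{tag}>", html,
                                re.IGNORECASE | re.DOTALL):
            score += len(term_re.findall(block)) * (boost - 1.0)
    return score

print(heading_boosted_score("<h2>Greenpeace features</h2> <p>greenpeace</p>",
                            "greenpeace"))  # 4.0: one boosted hit, one plain
```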

You don't care about meta tags, hence a good search engine will silently ignore them as well.

I wouldn't even be astonished if average response time per transferred byte were a metric. The slower the site, the worse it usually is.
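If that guess is right, the metric could be as trivial as this (entirely made up, of course):

```python
# Hypothetical metric: seconds of response time per kilobyte transferred.
# Higher values mean a slower site, i.e. a worse quality signal.
def speed_penalty(response_seconds: float, bytes_transferred: int) -> float:
    return response_seconds / (bytes_transferred / 1024)

print(speed_penalty(2.5, 50_000))  # roughly 0.05 s/KB
```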

How much of the site appears to be original content, and to what extent is it just a metasite? Original content is a ton more interesting. For example, the features section on Greenpeace links to a whole lot of different sites and gives the uninitiated bot the impression that the *major* navigation bar links to other sites. The bot takes this to be the major navigation bar because most sites that have links on the left use that area for navigation.
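One way a bot could quantify "metasite-ness" is the share of a page's links that point off-site; a navigation bar full of external links pushes that ratio up. A rough sketch, where the attribute parsing and the example hosts are assumptions:

```python
import re
from urllib.parse import urlparse

def offsite_link_ratio(html: str, own_host: str) -> float:
    """Fraction of href targets that leave the site entirely."""
    hrefs = re.findall(r'href="([^"]+)"', html, re.IGNORECASE)
    if not hrefs:
        return 0.0
    # Relative links (empty netloc) and links to our own host count as on-site.
    offsite = sum(1 for h in hrefs
                  if urlparse(h).netloc not in ("", own_host))
    return offsite / len(hrefs)

features = ('<a href="/campaigns/">Campaigns</a>'
            '<a href="http://example.org/story">Story</a>'
            '<a href="http://another.example/report">Report</a>')
print(offsite_link_ratio(features, "greenpeace.org"))  # 2/3 look like a metasite
```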

Then: what do you want to be indexed? What are people looking for when they look for Greenpeace - greenpeace.org itself or some particular content? What would be ten search terms for which Greenpeace should rank prominently? Which story or page seems to deserve a high ranking for each of those terms?

If we then look at the application - the page itself - we might ponder why it doesn't get the ranking it may deserve.

(These are all assumptions. Remember that Google said two years ago that they have more than 100 heuristics per page. :))