The search engine optimisation (SEO) process consists of designing, writing, and coding web pages to increase the likelihood that they will appear at the top of search engine results for targeted keyword phrases. Many so-called SEO experts claim to have reverse-engineered search engine algorithms, and use strategically created “doorway pages” and cloaking technology to maintain long-term search positions. Despite such claims, the basics of a successful search engine campaign have not changed in all the years we have provided these services.
To get the best overall, long-term search engine positions, three components must be present on a web page: the text component, the link component, and the popularity component.
All of the major search engines (AltaVista, FAST Search, Google, Lycos, MSN Search and other Inktomi-based engines) use these components as part of their search engine algorithms. Sites that have (a) all of the components on their web pages and (b) optimal levels of each component perform well in the search engines overall.
Since the search engines build lists of the words and phrases found at each URL, it naturally follows that to do well on the search engines, you must place those words in the strategic HTML tags on your web pages.
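To make the idea concrete, here is a minimal sketch of how an engine might build its word list from a page. The page content and the `WordIndexer` class are hypothetical illustrations, not any engine's actual indexing code:

```python
from html.parser import HTMLParser

class WordIndexer(HTMLParser):
    """Collects the words an engine might index from a page's visible text."""
    def __init__(self):
        super().__init__()
        self.words = []

    def handle_data(self, data):
        # Keep only alphabetic words, lowercased, punctuation stripped.
        for raw in data.split():
            word = raw.strip(".,!?").lower()
            if word.isalpha():
                self.words.append(word)

# Hypothetical page: the words in the title and body become index entries.
page = ("<html><head><title>Discount Hotels</title></head>"
        "<body><p>Book discount hotels online.</p></body></html>")
indexer = WordIndexer()
indexer.feed(page)
print(indexer.words)
# → ['discount', 'hotels', 'book', 'discount', 'hotels', 'online']
```

If a keyword phrase never appears in the page's text, there is simply nothing for the engine to match against a search query.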
The most important part of the text component of a search engine algorithm is keyword selection. In order for your target audience to find your site on the search engines, your pages must contain keyword phrases that match the phrases your target audience is typing into search queries.
Once you have determined the best keyword phrases to use on your web pages, you will need to place them within your HTML tags. Search engines do not all place emphasis on the same HTML tags: for example, Inktomi reads Meta tags, while Google ignores them. Thus, to do well on all of the search engines, it is best to place keywords in as many of the HTML tags as possible, without keyword stuffing. Then, no matter what a particular search engine's algorithm is, you know that your keywords are contained in your documents.
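The tag-placement advice above can be sketched as a small template helper. The function name and the sample phrases are hypothetical; the point is simply that the same keyword phrases appear in the title tag and both Meta tags, so each engine finds them in whichever tag it reads:

```python
def optimized_head(title_phrase, description, keywords):
    """Hypothetical helper: repeat keyword phrases across several HTML tags,
    since different engines weight different tags."""
    return (
        f"<title>{title_phrase}</title>\n"
        f'<meta name="description" content="{description}">\n'
        f'<meta name="keywords" content="{", ".join(keywords)}">'
    )

snippet = optimized_head(
    "Discount Hotels in Paris",
    "Book discount hotels in Paris online.",
    ["discount hotels", "paris hotels"],
)
print(snippet)
```

A real page would carry the same phrases into visible body text and headings as well; the snippet shows only the head-section tags mentioned above.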
The strategy of placing keyword-rich text in your web pages is useless if the search engine spiders have no way of finding that text. The way your pages are linked to each other, and the way your web site is linked to other web sites, does impact your search engine positions.
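A spider discovers pages only by following links, so a page no link points to is invisible to it. The breadth-first crawl below is a simplified sketch of that process, using a hypothetical three-page site stored in a dictionary rather than fetched over the network:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets, the way a spider discovers new pages."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical site: page content keyed by URL.
site = {
    "/": '<a href="/rooms.html">Rooms</a> <a href="/rates.html">Rates</a>',
    "/rooms.html": '<a href="/rates.html">Rates</a>',
    "/rates.html": "No outgoing links here.",
}

def crawl(start):
    """Breadth-first crawl: a page is reached only if some link leads to it."""
    seen, queue = set(), [start]
    while queue:
        url = queue.pop(0)
        if url in seen or url not in site:
            continue
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(site[url])
        queue.extend(parser.links)
    return seen

print(sorted(crawl("/")))
# → ['/', '/rates.html', '/rooms.html']
```

Remove the links on the home page in this sketch and the other two pages are never crawled, however keyword-rich their text is.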
Even though search engine spiders are powerful data-gathering programs, HTML coding or scripting can prevent a spider from effectively crawling your pages. Examples of site navigation schemes that can be problematic are:
1. Poor HTML coding in any navigation scheme: browsers (Netscape and Explorer) can display web pages with sloppy HTML coding, but search engine spiders are not as forgiving as browsers are.
2. Dynamic or database-driven web pages: pages that are generated via scripts or databases, and/or have a ?, &, $, =, +, or % in the URL, can present spider “traps.”
3. Flash: currently, none of the search engines can follow the links embedded in Flash documents.
Therefore, to ensure that the spiders have the means to record the data on your web pages, we recommend having two forms of navigation on a web page: one that pleases your end users, and one that the search engine spiders can follow.
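A quick way to audit a site for the dynamic-URL problem described above is to scan each URL for the characters spiders tend to choke on. The function below is a hypothetical checker, not any engine's actual rule:

```python
def looks_like_spider_trap(url):
    """Flag URLs containing characters (?, &, $, =, +, %) that many
    spiders refuse to crawl in dynamic, database-driven pages."""
    return any(ch in url for ch in "?&$=+%")

print(looks_like_spider_trap("/catalog.cgi?item=42&color=red"))  # True
print(looks_like_spider_trap("/catalog/item42-red.html"))        # False
```

The second, spider-friendly navigation path recommended above would link to static-looking URLs like the one in the second call.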
The popularity component of a search engine algorithm consists of multiple sub-components, the most important being link popularity and click-through popularity.
Web page popularity
Obtaining links from other sites is not enough to maintain optimal popularity. The major search engines and directories are measuring how often end users are clicking on the links to your site and how long they are staying on your site (i.e., reading your web pages). They are also measuring how often end users return to your site. All of these measurements constitute a site’s click-through popularity.
The search engines and directories measure both link popularity (quality and quantity of links) and click-through popularity to determine the overall popularity component of a web site.
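As a rough illustration of how those two measurements might combine, the function below blends a link-popularity term with a click-through term. The weights, the cap at 100 inbound links, and the averaging of click-through and return rates are all hypothetical choices for the sketch; the engines do not publish their actual formulas:

```python
def popularity_score(inbound_links, clickthrough_rate, return_rate,
                     link_weight=0.6):
    """Hypothetical blend of link popularity (quality/quantity of links,
    capped here at 100) and click-through popularity (how often users
    click through and come back). Real engine weights are unpublished."""
    link_popularity = min(inbound_links / 100, 1.0)
    click_popularity = (clickthrough_rate + return_rate) / 2
    return link_weight * link_popularity + (1 - link_weight) * click_popularity

# A site with 250 inbound links, a 40% click-through rate,
# and 20% of visitors returning:
print(round(popularity_score(250, 0.4, 0.2), 2))
# → 0.72
```

The shape of the formula captures the section's point: links alone max out the first term, so a site that users never click on or return to still loses ground to one with engaged visitors.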