How do they work? Search engines explained
The defining characteristic of search engines is that they index web pages as they crawl the internet with an automated tool called a spider. This tool is little more than a browser-like program that follows links from page to page and from one domain to another.
When you submit your site to Google, for example, you are simply telling its spider, the Googlebot, that it may visit your site.
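The link-following behaviour described above can be sketched in a few lines of Python. Everything here is invented for the demo: a real spider fetches pages over HTTP and respects robots.txt, while this sketch crawls a tiny in-memory "internet".

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, the way a spider finds new pages."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch):
    """Breadth-first crawl: visit a page, queue its links, never revisit."""
    seen, queue, order = {start_url}, deque([start_url]), []
    while queue:
        url = queue.popleft()
        order.append(url)
        parser = LinkExtractor()
        parser.feed(fetch(url))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

# A tiny made-up "internet" standing in for real HTTP fetches.
web = {
    "a.html": '<a href="b.html">B</a> <a href="c.html">C</a>',
    "b.html": '<a href="c.html">C</a>',
    "c.html": '<a href="a.html">A</a>',
}
print(crawl("a.html", web.get))  # ['a.html', 'b.html', 'c.html']
```

Note how the `seen` set keeps the spider from looping forever on the circular link from `c.html` back to `a.html`.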
What about directories?
First, they don't use spiders. They use real people, called editors, to validate the quality and relevance of your website. Because listings are checked by a human being, the quality of a search in a directory - we are talking here about the main ones, such as Yahoo or DMOZ - is superior to a fully automated search engine result page (SERP).
Probably because human beings cannot be replaced, search engines visit directories to help their ranking algorithms. For example, Google visits the Open Directory (Dmoz.org) on a regular basis.
The spider reads pages, follows links, and indexes what it can. It looks at the source code of a page to determine what the user can see. It cannot read Flash content or images, which is why building a website entirely in Flash or clickable images is terrible for SEO.
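A rough illustration of that "source code is all it sees" point, in Python: a parser recovers the plain text of a page, but images and Flash objects contribute nothing. The sample page and tag handling are simplified assumptions, not how any real spider is implemented.

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Keeps only the text a spider can actually read from the source."""
    INVISIBLE = ("script", "style", "object", "embed")

    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip = 0  # depth inside tags whose content is invisible

    def handle_starttag(self, tag, attrs):
        if tag in self.INVISIBLE:
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in self.INVISIBLE and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

def visible_text(html):
    parser = VisibleText()
    parser.feed(html)
    return " ".join(parser.chunks)

# The image and the Flash movie are simply invisible to the parser.
page = ('<h1>Welcome</h1><img src="banner.png">'
        '<embed src="site.swf"></embed><p>Contact us</p>')
print(visible_text(page))  # Welcome Contact us
```

The picture `banner.png` might say a great deal to a human visitor, but to the spider it is only a file name.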
The indexer analyses the different elements of a page - the title, heading text, bold text, bulleted lists, links and others - before dumping its results into the database.
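As a toy illustration of that analysis, the sketch below records which notable tags each word appears inside. The choice of tags and the sample page are assumptions made for the demo, not any real engine's scheme.

```python
from html.parser import HTMLParser

# Hypothetical set of tags an indexer might treat as significant.
WEIGHTED_TAGS = {"title", "h1", "h2", "b", "strong", "a"}

class PageIndexer(HTMLParser):
    """Records, for every word, which notable tags it appeared inside."""
    def __init__(self):
        super().__init__()
        self.stack = []   # currently open tags
        self.terms = {}   # word -> set of contexts it was seen in

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        while self.stack and self.stack.pop() != tag:
            pass

    def handle_data(self, data):
        context = {t for t in self.stack if t in WEIGHTED_TAGS}
        for word in data.lower().split():
            self.terms.setdefault(word, set()).update(context or {"body"})

def index_page(html):
    parser = PageIndexer()
    parser.feed(html)
    return parser.terms

idx = index_page("<title>Search engines</title>"
                 "<h1>How spiders work</h1><p>plain text</p>")
print(sorted(idx["search"]))   # ['title']
print(sorted(idx["spiders"]))  # ['h1']
```

The resulting dictionary is the "dump into the database": a later query for "spiders" can see at a glance that the word appeared in a heading rather than buried in body text.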
The database - we probably should say the data centers - stores replicas of the web pages previously indexed. How search engines store data will be explained in a further article.
The algorithm is the interesting part for search engine optimization, when explained by professionals. It calculates the actual position of a page for a given search term.
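No engine discloses its real algorithm, but the idea of calculating a position for a search term can be caricatured as a field-weighted term count: a match in the title counts for more than a match in the body. The weights and sample pages below are entirely made up.

```python
# Invented weights: a title match outranks a heading match outranks body text.
WEIGHTS = {"title": 3.0, "h1": 2.0, "body": 1.0}

def score(page, query):
    """Sum field-weighted term counts; real algorithms use far more signals."""
    total = 0.0
    for field, text in page.items():
        words = text.lower().split()
        for term in query.lower().split():
            total += WEIGHTS.get(field, 1.0) * words.count(term)
    return total

pages = {
    "spiders.html": {"title": "search spiders", "body": "how spiders crawl"},
    "serp.html": {"title": "result pages", "body": "search result pages explained"},
}

# The SERP is just the pages sorted by descending score for the query.
ranking = sorted(pages, key=lambda u: score(pages[u], "search spiders"),
                 reverse=True)
print(ranking)  # ['spiders.html', 'serp.html']
```

For the query "search spiders", the first page scores 7.0 (both words in the title, one repeat in the body) against 1.0 for the second, so it takes the top position.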
The interface is the only visible element: it is what you see when you land on MSN, Google, Yahoo and the others. You enter your text in the search box, a query is sent to a data center, and a result is returned.
Is it the same algorithm for all of them?
Obviously, search engines use different criteria to rank web pages. The search engine result pages vary from one engine to another; when you get exactly the same ranking, it means the data is shared. An example of shared data is Google sending results to AOL, Netscape and iWon for organic search, while receiving the directory listings of DMOZ, the Open Directory Project.
How do search engines rank websites?
Crawler-based engines sort websites with their own specific algorithms. Because results are not human-reviewed, they may pull up a few irrelevant sites on certain queries. The spider looks at specific areas of a page and matches them against the query. Google pioneered the use of off-page ranking factors, also called link popularity; these factors are said to be less easily influenced by webmasters and to give more accurate SERPs. The off-page factors that influence a ranking are the number of inbound links and their anchor text.
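The two off-page factors just mentioned - inbound link counts and the text of those links - can be tallied as below. This is a toy sketch over made-up link data, not PageRank or any engine's actual method.

```python
from collections import defaultdict

def link_popularity(links):
    """links: (source, target, anchor_text) triples, as a spider records them.

    Returns each page's inbound-link count and the anchor words describing it.
    """
    inbound = defaultdict(int)
    anchors = defaultdict(list)
    for source, target, text in links:
        if source != target:  # a page linking to itself earns nothing
            inbound[target] += 1
            anchors[target].append(text)
    return inbound, anchors

# Hypothetical links discovered while crawling three pages.
links = [
    ("a.html", "c.html", "search engines explained"),
    ("b.html", "c.html", "how spiders work"),
    ("c.html", "a.html", "home"),
]
inbound, anchors = link_popularity(links)
print(inbound["c.html"])   # 2 inbound links
print(anchors["c.html"])   # the anchor text other sites used
```

Here `c.html` looks most popular, and the anchor text tells the engine what other webmasters think the page is about - which is exactly why these signals are harder to manipulate than on-page text.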
This article explained that search engines are the sites that collect data from across the internet so that websites, information and content can be found more easily.