How Search Engines Work

Before attempting search engine optimization (SEO), one has to understand how a search engine works. This does not mean studying the program code or the exact algorithm, but gaining a basic understanding of how a search engine such as Google operates.

Many thousands of websites appear and evolve every day, providing useful information. But a website remains unknown to people around the world unless they already know its URL. That is not the optimal use of the web, which gives us the opportunity to access information without the limitation of distance.
·         The web's first search engine robot, the Perl-based World Wide Web Wanderer, was built in 1993 by Matthew Gray at MIT and used to generate an index.
·         The web's second search engine, Aliweb, appeared in November 1993. It did not crawl the web; instead, website administrators notified it of the existence at each site of an index file in a particular format.
·         JumpStation (released in December 1993) used a web robot to find web pages and build its index, and used a web form as the interface to its query program. It was the first tool to combine the three essential features of a search engine: crawling, indexing, and searching.
Different Search Engines
There are many search engines, more than a hundred in all. The most popular among them are Google, Yahoo, and Bing, and each uses a different strategy for finding pages.
Search engines use software robots called “spiders”, which crawl the web and build lists of the words found on websites.
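
One common way to store such word lists is an inverted index, which maps each word to the pages that contain it. The minimal Python sketch below illustrates the idea; the sample pages and the whitespace tokenizer are made-up placeholders, far simpler than what a real engine uses:

    from collections import defaultdict

    def build_index(pages):
        """Map each word to the set of URLs whose text contains it."""
        index = defaultdict(set)
        for url, text in pages.items():
            for word in text.lower().split():
                index[word].add(url)
        return index

    # Hypothetical sample data for illustration.
    pages = {
        "http://example.com/a": "search engines crawl the web",
        "http://example.com/b": "spiders crawl and index pages",
    }
    index = build_index(pages)
    print(index["crawl"])  # both URLs contain the word "crawl"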

How “spiders” work: the crawler’s usual starting point is a list of heavily used servers and popular pages.

A web crawler (or spider) is a program or automated script that browses the internet seeking web pages to process. Starting from a URL, the crawler fetches the page and looks for hyperlinks; it then follows those links and carries on the same way, as sketched below. This process is called web crawling or spidering.
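
As a rough illustration, here is a minimal sketch of that fetch-and-follow loop in Python, assuming the third-party requests and beautifulsoup4 libraries; the seed URL and the page limit are placeholders, not part of any real engine:

    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_url, max_pages=50):
        """Breadth-first crawl: fetch a page, collect its links, repeat."""
        frontier = deque([seed_url])   # URLs waiting to be fetched
        visited = set()                # URLs already processed

        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            visited.add(url)
            try:
                response = requests.get(url, timeout=5)
            except requests.RequestException:
                continue  # skip unreachable pages
            soup = BeautifulSoup(response.text, "html.parser")
            # Extract every hyperlink and queue the ones not seen yet.
            for anchor in soup.find_all("a", href=True):
                link = urljoin(url, anchor["href"])
                if link.startswith("http") and link not in visited:
                    frontier.append(link)
        return visited

A real spider adds politeness rules (respecting robots.txt, rate limits), deduplication, and distributed storage on top of this basic loop.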
In Google’s early system, each spider could reportedly keep about 300 connections to web pages open at a time.
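
Keeping hundreds of connections open at once is a concurrency problem. The sketch below shows one modern way to express the same idea, using Python’s asyncio with the third-party aiohttp library; the URL list and the 300-connection cap are illustrative assumptions, not Google’s actual implementation:

    import asyncio

    import aiohttp

    async def fetch(session, url, semaphore):
        """Fetch one page, bounded by the shared connection limit."""
        async with semaphore:
            try:
                timeout = aiohttp.ClientTimeout(total=5)
                async with session.get(url, timeout=timeout) as resp:
                    return url, await resp.text()
            except (aiohttp.ClientError, asyncio.TimeoutError):
                return url, None  # skip unreachable pages

    async def fetch_all(urls, max_connections=300):
        # The semaphore caps how many fetches run at once, mirroring
        # a spider that keeps about 300 connections open at a time.
        semaphore = asyncio.Semaphore(max_connections)
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(
                *(fetch(session, url, semaphore) for url in urls)
            )

    # Example usage:
    # pages = asyncio.run(fetch_all(["http://example.com"]))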