Free Directory Submission of Web Page Addresses to Search Engines such as Google, and How to Submit and Advertise for Success!

Erin Kilgour, Staff Writer, 9-14-2009

Web directories, or link directories as they are also known, are lists of websites on the World Wide Web. They specialize in linking to numerous websites and categorizing them according to various topics. Web directories are not search engines, as they do not display web pages on the basis of keywords; instead, websites are grouped into categories and subcategories that differentiate between products and services. Most directories allow free directory submission to encourage people to list their sites. A directory listing covers the website as a whole, unlike a search engine, which matches individual web pages against a set of keywords. While free directory submission is allowed, the number of categories a site can be listed under is limited. Web directories let webmasters submit a site for inclusion through free directory submission, and the submissions are then reviewed and edited by human editors before acceptance. The process of free directory submission is quite similar to the process you follow when you submit a web address to Google. Most directories have a general scope, spanning a wide range of categories, regions and languages. There are also niche directories that focus on restricted categories, regions, languages or specialized sectors. The best example of a niche directory with a considerably large number of listed sites is the shopping directory, which lists most retail e-commerce websites; if you happen to deal in a similar or related field, it would be wise to submit your website through its free directory submission program.

The most popular directories are the Yahoo! Directory and the Open Directory Project, both with extensive categorization and a huge set of listings. Apart from allowing free directory submission, these listings are also reused by smaller directories and search engines, although the quality of those directories and their copied databases is under debate because the search engines use the content without real integration. Submitting your web site only to Google also limits the site's visibility to searchers. Google is the most popular search engine, so almost everyone wants to submit web pages to Google. This is both good and bad: while submitting to a single popular search engine is easier, there is far more competition there, and the chances of your site being displayed are significantly lessened.

Understanding the basics of Google search can help a great deal if you want to submit a web address to Google. When keywords are typed in, Google almost instantaneously presents a list of results from across the web. So how does Google find web pages that match the query, and on what basis does it determine the order of the search results? Google employs three key processes to deliver results: crawling, indexing and serving. Crawling is the process by which Googlebot discovers new and updated pages after you submit a web address to Google. Google uses a huge number of computers to track billions of pages on the net. Googlebot is the program that fetches this information; such programs are also called bots, robots or spiders, hence the term "crawling". When you submit web pages to Google, Googlebot uses an algorithmic process to determine which websites to crawl, how frequently, and how many pages to fetch from each site. The crawl starts with a list of web page URLs generated by earlier crawls; when you submit your web site to Google, that list is augmented with the sitemap data you provide. As the crawler visits each submitted website, it detects the links on each page and adds them to its list of pages to crawl. While doing so it notes new sites, records changes to existing sites, removes dead links, and updates the Google index accordingly. Always remember that Google does not accept any kind of payment to crawl your site more frequently; the search side of the business and revenue-generating services such as AdWords are dealt with separately.
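
To make the crawl loop described above concrete, here is a minimal sketch in Python using only the standard library. The seed URL, the page limit and the example.com address are illustrative assumptions; Googlebot's real scheduling, politeness and freshness rules are far more sophisticated than this.

```python
# Minimal crawl-loop sketch: start from seed URLs, fetch pages, extract links,
# and add newly discovered pages to the frontier. Illustrative only.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=50):
    frontier = deque(seed_urls)      # URLs waiting to be fetched
    seen = set(seed_urls)            # avoid re-crawling the same page
    crawled = []                     # pages handed off to the indexer

    while frontier and len(crawled) < max_pages:
        url = frontier.popleft()
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue                 # dead link: drop it and move on

        crawled.append(url)

        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)        # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)        # newly discovered page

    return crawled


if __name__ == "__main__":
    # The seed list would in practice come from earlier crawls plus submitted sitemaps.
    print(crawl(["https://example.com/"], max_pages=5))
```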

After Googlebot is done processing the pages, it compiles them into a massive index based on the words it has come across, noting the location of each word on each page. The information processed also includes key content tags and attributes such as title tags and ALT attributes. Googlebot cannot process every content type, though it does handle a sizeable amount; the content of rich media files and some dynamic pages, for instance, cannot be processed. After you successfully submit your web site to Google and it has been crawled and indexed, only serving results, the ultimate purpose of the search engine, remains. When a searcher keys in a query word or phrase, Google's machines reach into the indexed pages, pull out matching pages and display the results ordered by what it considers most relevant to the searcher. More than 200 factors determine the relevancy of a site, and one of the most important is the PageRank of a web page. PageRank is a measure of the importance of a web page based on the incoming links from other web pages; in other words, each link that comes into your site from another site raises your site's PageRank. Since not all links are equal, Google refines the results for a better user experience by identifying and discounting spam links and other practices that can negatively affect search results. Following the webmaster guidelines provided by Google can help you adopt best practices, avoid common pitfalls, improve your site's ranking, increase visibility and receive a better quantity and quality of clicks. Google also offers features such as related searches, Google Suggest and spelling suggestions, designed to save searchers precious time by displaying related terms and popular queries and by catching common misspellings; the keywords these features use are generated automatically by the web crawlers and the search algorithms.
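
As a rough illustration of the PageRank idea described above, the following Python sketch runs the classic power-iteration calculation over a tiny, made-up link graph. The four-page graph and the damping factor of 0.85 are assumptions for the example, not Google's actual data or its full ranking formula, which, as noted, weighs more than 200 factors.

```python
# PageRank sketch: a page's score is built from the scores of the pages
# that link to it, spread across their outgoing links.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}    # start with an even split

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share       # each inbound link passes on value
        rank = new_rank

    return rank


if __name__ == "__main__":
    # Hypothetical four-page site: every link into "home" raises its score.
    graph = {
        "home": ["products", "about"],
        "products": ["home"],
        "about": ["home"],
        "blog": ["home", "products"],
    }
    for page, score in sorted(pagerank(graph).items(), key=lambda x: -x[1]):
        print(f"{page}: {score:.3f}")
```

Running it shows that "home", which receives a link from every other page in this toy graph, ends up with the highest score.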

Things to consider:

  1. Search engines will penalize duplicate or near-duplicate content, whether it repeats pages of your own website or pages belonging to other webmasters.  This can be overcome, and the webmaster assured that his site is indeed unique on the World Wide Web, by using a software tool such as Similar Page Checker.  Such tools can be found online and benefit the webmaster by comparing the pages of his website against other pages on the Internet.
  2. The second thing to consider is that crawlers can be directed as to where to crawl and where not to crawl, so that they find your relevant content and keywords and skip irrelevant content, such as the about-us page.  Placing a robots.txt file in the root directory of your website helps ensure the crawlers read your site the way you intend, since you can direct them where to go; see the sketch after this list.
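
As a concrete example of point 2, the sketch below uses Python's standard urllib.robotparser to read a hypothetical robots.txt file that keeps crawlers away from an about-us page. The rules and the www.example.com URLs are illustrative assumptions; in practice the file simply sits at the root of your site as /robots.txt.

```python
# robots.txt sketch: steer compliant crawlers away from a hypothetical
# /about-us/ page while leaving the rest of the site crawlable.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /about-us/
Sitemap: https://www.example.com/sitemap.xml
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks the rules before fetching each page.
print(parser.can_fetch("Googlebot", "https://www.example.com/products/"))   # True
print(parser.can_fetch("Googlebot", "https://www.example.com/about-us/"))   # False
```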