Website Content and the Googlebot

By Melissa Neilson
Aug 16, 2016

You certainly hope people will find your website or blog through a search. But typically, you have to wait for the Googlebot to crawl your site and add it to the Google index. Here are the basics of how website content is crawled and indexed:

The Googlebot is simply software that Google sends out to collect information about documents on the web to add to Google’s searchable index.

Crawling is the process by which the Googlebot travels from website to website, finding new and updated content to report back to Google. It discovers what to crawl by following links.
 
Indexing is the processing of the information the Googlebot gathers as it crawls. Once documents are processed, they are added to Google's searchable index if they are judged to be quality content. During indexing, the Googlebot processes the words on a page and records where on the page each word appears.
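
To picture what "words and where they are located" means, here is a minimal sketch of an inverted index in Python. The pages, URLs, and text are made up for illustration, and Google's actual index is far more sophisticated than this toy.

```python
from collections import defaultdict

def build_index(documents):
    """Map each word to the pages it appears on and its
    positions within each page (a toy inverted index)."""
    index = defaultdict(lambda: defaultdict(list))
    for url, text in documents.items():
        for position, word in enumerate(text.lower().split()):
            index[word][url].append(position)
    return index

# Hypothetical pages standing in for crawled documents.
docs = {
    "example.com/a": "Googlebot crawls the web",
    "example.com/b": "the web is indexed by Google",
}

index = build_index(docs)
print(dict(index["web"]))  # {'example.com/a': [3], 'example.com/b': [1]}
```

A structure like this is what lets a search engine answer a query without re-reading every page: it already knows which documents contain each word, and where.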

The Googlebot finds new content on the web, such as new websites, blogs, and pages, by starting with the pages captured during previous crawls and adding in sitemap data submitted by webmasters. As it browses previously crawled pages, it detects links on those pages and adds them to the list of pages to be crawled.
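
That loop (start from known pages, fetch them, harvest their links, repeat) can be sketched in a few lines of Python. To be clear, this is only a toy illustration of the idea, not Googlebot's actual code; the example.com seed and the regex-based link extraction are stand-ins for illustration.

```python
import re
from urllib.request import urlopen

def crawl(seed_urls, max_pages=10):
    """Breadth-first crawl: fetch known pages and queue any
    links discovered on them, skipping duplicates."""
    frontier = list(seed_urls)   # pages waiting to be fetched
    seen = set(seed_urls)        # never queue the same URL twice
    pages = {}
    while frontier and len(pages) < max_pages:
        url = frontier.pop(0)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page; a real crawler would retry later
        pages[url] = html
        # Naive link extraction; a real crawler parses the HTML
        # properly and honors robots.txt and crawl-rate limits.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return pages

# The seed list stands in for "pages captured during previous
# crawls plus sitemap data"; example.com is a placeholder.
fetched = crawl(["https://example.com/"])
print(list(fetched))
```

Here the seed list plays the role of the pages captured during previous crawls plus sitemap submissions, and the frontier is the ever-growing list of pages waiting to be crawled.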
