A website can only appear in Google Search after it has been added to Google’s index. To ensure that (almost) every website on the web can be found through Google Search, Googlebot crawls billions of pages daily in search of new and updated content.
> “Googlebot is Google’s web crawling bot (sometimes also called a ‘spider’). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.” — Google Search Console Help
Webmasters have a certain amount of influence over Googlebot and its crawling process: they can decide which content on their website should be added to, or excluded from, the Google index.
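As a minimal sketch of how this works in practice, the standard mechanism for steering crawlers is a robots.txt file placed at the root of the domain; the directory name below is a placeholder, not a path from the original text:

```
# robots.txt — served at https://www.example.com/robots.txt
# Ask Googlebot not to crawl the (hypothetical) /internal/ directory
User-agent: Googlebot
Disallow: /internal/

# All other crawlers may crawl everything
User-agent: *
Disallow:
```

Note that robots.txt only controls crawling; to keep an already crawlable page out of the index, a `<meta name="robots" content="noindex">` tag in the page’s `<head>` is the usual approach.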
- Understand Google’s crawling behavior: Crawling and Indexing for extensive websites
To make it easier for Googlebot to crawl and understand your website, it is important to practice good OnPage optimization, to use a solid page structure (sitemaps), and to keep the internal link structure in mind.
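As an illustration, a minimal XML sitemap following the sitemaps.org protocol might look like this (the domain, paths, and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; example.com is a placeholder domain -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about</loc>
    <lastmod>2024-01-10</lastmod>
  </url>
</urlset>
```

A sitemap like this is typically submitted via Google Search Console or referenced from robots.txt with a `Sitemap:` line.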
Video explanation by Matt Cutts (Google) on this topic:
How Search Works
The life span of a Google query is less than half a second, and it involves quite a few steps before you see the most relevant results. Find out more about how it works in the detailed articles listed in this section.