A website can only appear in Google Search results if it has previously been added to Google's index. To ensure that (almost) all websites on the web can be found through Google Search, Googlebot crawls (searches through) billions of pages every day, looking for new and updated content.
"Googlebot is Google's web crawling bot (sometimes also called a 'spider'). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index." (Google Search Console Help)
Webmasters can exert some influence over Googlebot and its crawling process: they can decide which content on their website should be added to the Google index and which should be kept out, for example through the robots.txt file or robots meta tags.
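As a minimal sketch of how such crawl rules work, the snippet below parses a hypothetical robots.txt (the `example.com` URLs and the `/private/` path are invented for illustration) with Python's standard-library `urllib.robotparser` and checks which URLs Googlebot would be allowed to fetch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for an example site:
# Googlebot may crawl everything except the /private/ section.
robots_txt = """\
User-agent: Googlebot
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Public page: allowed for Googlebot.
print(parser.can_fetch("Googlebot", "https://example.com/index.html"))          # True
# Page under the disallowed path: blocked for Googlebot.
print(parser.can_fetch("Googlebot", "https://example.com/private/report.html")) # False
```

Note that robots.txt only controls crawling; to keep an already-discovered page out of the index, a `noindex` robots meta tag on the page itself is the usual mechanism.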
- Understand Google’s crawling behavior: Crawling and Indexing for extensive websites
- Video explanation by Matt Cutts / Google on the topic: How Search Works
The life span of a Google query is less than half a second, yet it involves quite a few steps before you see the most relevant results. Here's how it all works.