The fundamentals of Search

Before you search, web crawlers gather information from across hundreds of billions of webpages and organize it in the Search index.
Finding information by crawling
The web is like an ever-growing library with billions of books and no central filing system. We use software known as web crawlers to discover publicly available webpages. Crawlers look at webpages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those webpages back to Google’s servers.

The crawling process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As the crawlers visit those websites, they use links on those sites to discover other pages. The software pays special attention to new sites, changes to existing sites and dead links. Computer programs determine which sites to crawl, how often and how many pages to fetch from each site.

We offer Search Console to give site owners granular choices about how Google crawls their site: they can provide detailed instructions about how to process pages on their sites, request a recrawl or opt out of crawling altogether using a file called “robots.txt”. Google never accepts payment to crawl a site more frequently; we provide the same tools to all websites to ensure the best possible results for our users.
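To make the link-following loop concrete, here is a minimal crawler sketch in Python. The seed URL, the page limit and the politeness delay are invented for illustration; none of this reflects how Googlebot is actually configured.

    # A minimal crawler sketch: fetch a page, extract its links, and add
    # unseen links to the frontier, breadth-first. Seed URL, page limit
    # and delay are illustrative assumptions, not Googlebot's settings.
    import time
    import urllib.request
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        """Collects href targets of <a> tags, resolved against the page URL."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(urljoin(self.base_url, value))

    def crawl(seeds, max_pages=10, delay=1.0):
        frontier = deque(seeds)   # pages waiting to be visited
        seen = set(seeds)         # never fetch the same address twice
        fetched = 0
        while frontier and fetched < max_pages:
            url = frontier.popleft()
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    html = response.read().decode("utf-8", errors="replace")
            except OSError:
                continue          # dead link: note it and move on
            fetched += 1
            parser = LinkExtractor(url)
            parser.feed(html)
            for link in parser.links:   # links on this page lead to new pages
                if link.startswith("http") and link not in seen:
                    seen.add(link)
                    frontier.append(link)
            time.sleep(delay)     # be polite to the servers we visit

    crawl(["https://example.com/"])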
Organizing information by indexing
When crawlers find a webpage, our systems render the content of the page, just as a browser does. We take note of key signals — from keywords to website freshness — and we keep track of it all in the Search index.
The Google Search index contains hundreds of billions of webpages and is well over 100,000,000 gigabytes in size. It’s like the index in the back of a book — with an entry for every word seen on every webpage we index. When we index a webpage, we add it to the entries for all of the words it contains.
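The book-index analogy corresponds to a data structure known as an inverted index: a map from each word to the set of pages that contain it. Here is a toy sketch in Python; the pages and the whitespace tokenization are invented for illustration, and a real index records far more than this.

    # A toy inverted index: every word maps to the set of pages that
    # contain it, like a book index with an entry for every word.
    from collections import defaultdict

    pages = {
        "page1.html": "dogs are loyal pets",
        "page2.html": "cats and dogs",
    }

    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)   # add this page to the word's entry

    print(sorted(index["dogs"]))   # ['page1.html', 'page2.html']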
With the Knowledge Graph, we’re continuing to go beyond keyword matching to better understand the people, places and things you care about. To do this, we organize not only information about webpages but other kinds of information too. Today, Google Search can help you search text from millions of books from major libraries, find travel times from your local public transit agency, or navigate data from public sources like the World Bank.
How Search Works
These processes lay the foundation — they’re how we gather and organize information on the web so we can return the most useful results to you. Our index is well over 100,000,000 gigabytes, and we’ve spent over one million computing hours to build it.
Finding information by crawling
Our most well-known crawler is called “Googlebot.” The crawl process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As Googlebot visits these websites, it looks for links to other pages to visit. The software pays special attention to new sites, changes to existing sites and dead links.

Computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Google doesn’t accept payment to crawl a site more frequently for our web search results. We care more about having the best possible results, because in the long run that’s what’s best for users and, therefore, our business.
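One way to picture “which sites to crawl, how often” is as a scheduling problem. The sketch below revisits each site on a fixed interval using a priority queue ordered by next-due time; the site names and intervals are invented, and real crawl scheduling weighs many more factors, such as how often a site changes and how much load it can bear.

    # A toy crawl scheduler: each site has a revisit interval, and a
    # heap ordered by next-due time decides which site to fetch next.
    # Sites and intervals are invented for illustration.
    import heapq

    schedule = []  # entries: (next_due_hour, interval_hours, site)
    for site, interval in [("news-site.example", 1), ("static-site.example", 72)]:
        heapq.heappush(schedule, (0, interval, site))

    now = 0
    for _ in range(5):
        due, interval, site = heapq.heappop(schedule)
        now = max(now, due)
        print(f"t={now:>3}h crawl {site}")
        heapq.heappush(schedule, (now + interval, interval, site))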
Choice for website owners
Most websites don’t need to set up restrictions for crawling, indexing or serving, so their pages are eligible to appear in search results without any extra work. That said, site owners have many choices about how Google crawls and indexes their sites through Search Console (formerly Webmaster Tools) and a file called “robots.txt”. With the robots.txt file, site owners can choose not to be crawled by Googlebot, or they can provide more specific instructions about how to process pages on their sites.
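For example, a hypothetical site that wants to keep a /private/ directory out of Googlebot’s crawl while leaving everything else open could serve a robots.txt file like this from its root (the directory name is invented for illustration):

    # robots.txt: keep Googlebot out of one directory; all other
    # crawlers and paths are unrestricted.
    User-agent: Googlebot
    Disallow: /private/

    User-agent: *
    Disallow: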
Site owners can choose how content is indexed on a page-by-page basis. For example, they can opt to have their pages appear without a snippet (the summary of the page shown below the title in search results) or a cached version (an alternate version stored on Google’s servers in case the live page is unavailable); the markup for these choices is sketched below. Webmasters can also choose to integrate search into their own pages with Custom Search.
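Those page-level choices are expressed with a robots meta tag in the page’s HTML. A page that should be listed without a snippet or a cached copy, for instance, would include the standard nosnippet and noarchive directives:

    <!-- In the page's <head>: ask that this page be shown in search
         results without a snippet and without a cached copy. -->
    <meta name="robots" content="nosnippet, noarchive">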
Organizing information by indexing
The web is like an ever-growing public library with billions of books and no central filing system. Google essentially gathers the pages during the crawl process and then creates an index, so we know exactly how to look things up. Much like the index in the back of a book, the Google index includes information about words and their locations. When you search, at the most basic level, our algorithms look up your search terms in the index to find the appropriate pages.
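At that most basic level, the lookup is a set intersection over index entries: a page matches a multi-word query if it appears in the entry for every word. Here is a toy sketch with an invented two-page index and no ranking at all:

    # Look up a query in a small inverted index: a page matches if it
    # appears in the entry for every query word (set intersection).
    index = {
        "dogs": {"page1.html", "page2.html"},
        "cats": {"page2.html"},
    }

    def lookup(index, query):
        words = query.lower().split()
        if not words:
            return set()
        results = set(index.get(words[0], set()))
        for word in words[1:]:
            results &= index.get(word, set())
        return results

    print(lookup(index, "dogs cats"))   # {'page2.html'}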
The search process gets much more complex from there. When you search for “dogs” you don’t want a page with the word “dogs” on it hundreds of times. You probably want pictures, videos or a list of breeds. Google’s indexing systems note many different aspects of pages, such as when they were published, whether they contain pictures and videos, and much more. With the Knowledge Graph, we’re continuing to go beyond keyword matching to better understand the people, places and things you care about.