The Crawling-Log in the Optimizer shows you in real time how your website is being analysed by our crawler. This logfile gives you exact insights into the crawling status of your Optimizer project. Video walk-through below.
In the "Crawling-Log" evaluation, the Optimizer crawler gives you insights into the currently ongoing project crawl. If no crawl is active, you'll see the data from the last crawl.
The first four boxes show you an overview of the most important information. If the crawler is currently active, the information on this page is updated every second.
- Crawler-Status: the current status of the Optimizer crawler for this project. The possible values are:
- Queued: The crawler has been started and is currently being assigned to a free crawl server. This usually takes just a couple of seconds, but can take longer with special settings (e.g. a very high number of URLs or a fixed IP address).
- Crawling: The project crawl is currently running. The data on the page is updated automatically every second.
- Parsing: The crawl has ended. Some final evaluations and analyses are still being completed so that you can use the crawl data in the Optimizer.
- Finished: The crawler for this project is not currently running. Click the green "Restart Crawler" button to start a crawl whenever you want.
- Runtime: total runtime of the current crawl, from the start to the end of the analysis.
- Crawled URLs: number of crawled URLs. This includes all URLs found by the crawler: HTML pages, redirects, resources and external links.
- URL Limit: the maximum number of URLs that can be crawled in this project. If the Optimizer crawler reaches this limit, it stops the crawl and evaluates the data gathered up to that point. You can increase the URL limit in the project settings.
In this table you can find more information about the scope of the currently ongoing crawl. The values in detail:
- Total URLs crawled: total number of URLs crawled in this project. All HTTP requests are counted here. This number is the sum of the three following values.
- Total HTML URLs crawled: number of HTML pages found by the crawler in this project. Here we count the HTML documents that are within the project scope and have the correct HTML content type.
- External URLs crawled: number of external pages that have been crawled. The Optimizer crawler can check whether the links on your website still work.
- Queued URLs: number of URLs that the Optimizer will process but which have not been crawled yet.
- Blocked URLs: number of URLs that the crawler was not allowed to fetch. This happens when a URL is blocked by the robots.txt. Only the HTML pages of the project are counted here.
- Failed URLs: number of URLs that return a 4xx or 5xx status code. Only the HTML pages of the project are counted here.
- Indexable URLs: number of crawled URLs that can be indexed by Google. These URLs are neither blocked by the robots.txt nor excluded from indexing by on-page instructions (e.g. a noindex meta tag).
- Bytes transferred: data volume transferred during this crawl.
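The "Blocked URLs" check above follows the standard robots.txt rules. As a minimal sketch of how such a check works (the robots.txt content and user-agent here are hypothetical examples, not the Optimizer's own settings), Python's standard library can evaluate the rules directly:

```python
# Sketch: deciding whether a URL counts as "blocked" by robots.txt,
# using Python's built-in robots.txt parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only.
robots_txt = """User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

def is_blocked(url: str, user_agent: str = "*") -> bool:
    """A URL is 'blocked' when robots.txt forbids the crawler to fetch it."""
    return not parser.can_fetch(user_agent, url)

print(is_blocked("https://example.com/private/page.html"))  # True (blocked)
print(is_blocked("https://example.com/index.html"))         # False (allowed)
```

A real crawler would fetch the site's actual robots.txt and use its own user-agent token instead of the wildcard shown here.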
This evaluation shows you the number of crawled URLs per minute. This allows you to see the time history of your Optimizer crawl at a glance. Pages with a 2xx status code are coloured green, redirects (3xx status codes) yellow, and error pages (4xx or 5xx status codes) red.
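The colour grouping above boils down to the HTTP status class. A minimal sketch (the function name is illustrative, not part of the product):

```python
# Sketch: mapping an HTTP status code to the chart colour described above.
def status_colour(status_code: int) -> str:
    if 200 <= status_code < 300:
        return "green"   # successful pages (2xx)
    if 300 <= status_code < 400:
        return "yellow"  # redirects (3xx)
    return "red"         # client or server errors (4xx/5xx)

print(status_colour(200))  # green
print(status_colour(301))  # yellow
print(status_colour(404))  # red
```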
Crawler Live Logfile
In this table you see the most recently crawled URLs of your Optimizer project. Next to the crawl time you can see the status code of the URL, the load time in seconds and the URL itself.
Once the crawl has finished, you can search and sort all URLs within your project.