Additional Integrations for the Search Console Data
As previously announced, we are continuing to add new features for the Search Console data within the Toolbox. Today, we went live with the next set of features: a new bar at the top of many evaluations that lets you choose the data source for each evaluation.
Today we are starting with the keyword table at SEO -> Keywords.
Once you click on one of the three or four possible data sources, the evaluations on the page will use this source. This bar will become the standardized way across the Toolbox of setting the data source you want to evaluate. The possible sources are:
- Extended Data – available for all domains in the DE and ES country indexes – 17 million (DE) and 5 million (ES) keywords, respectively, updated at least once a month. This source already includes the weekly history data.
- Common Data – available for all domains in all country indexes – around 1 million keywords per country, updated once a week.
- Smartphone Data – available for all domains in all country indexes – the same keywords we check for our historic keyword index, with the difference that we collect this data through a smartphone browser.
- Search Console – available for your own domains – all the data that Google makes available through the Search Console API for your domains. Google returns a maximum of 5,000 entries per day. This data is now available within your Toolbox account.
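To illustrate where that 5,000-entry limit comes into play: the Search Console API accepts a `rowLimit` on each Search Analytics query. Below is a minimal sketch of such a request body in Python; the dates are placeholders and the helper function is our own, not part of any library:

```python
# Sketch of a Search Analytics request body, as accepted by the
# Search Console API's searchanalytics.query method.
# The dates below are placeholder values for illustration only.

def build_query(start_date: str, end_date: str, start_row: int = 0) -> dict:
    """Request one page of up to 5,000 rows, broken down by the four
    dimensions Google reports on: query, page, device and country."""
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query", "page", "device", "country"],
        "rowLimit": 5000,       # maximum number of entries per request
        "startRow": start_row,  # offset, for paging through the results
    }

body = build_query("2024-01-01", "2024-01-31")
```

The body would then be passed to an authenticated API client; authentication is left out here, since it depends on your setup.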
Special characteristics of the Search Console data
The data that Google provides through the Search Console API has some particularities which make it difficult to draw direct comparisons between the Search Console data and the familiar Toolbox data. To help you avoid possible misinterpretations, we will now point out the main pitfalls.
The data refers to the combination of keyword, URL, device and country
Google always shows impressions (“how often has a result been shown to users?”), clicks (“how often has a user clicked on the result?”) and CTR (“what percentage of impressions led to clicks?”) for the combination of keyword (search phrase), URL, device (desktop, tablet or smartphone) and the country the user searched from. If even one of these four values changes, Google shows a new entry.
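The effect of this four-way combination can be sketched with a few example rows; all numbers below are invented for illustration:

```python
# Each Search Console row is keyed by the full combination of
# keyword, URL, device and country; CTR is clicks / impressions.
# All values below are invented for illustration.

rows = [
    # (query,    page,     device,    country, impressions, clicks)
    ("example", "/",       "DESKTOP", "deu",   1000,        520),
    ("example", "/",       "MOBILE",  "deu",    800,        430),
    ("example", "/about",  "DESKTOP", "deu",    300,         12),
]

def ctr(clicks: int, impressions: int) -> float:
    """CTR as the percentage of impressions that led to clicks."""
    return 100.0 * clicks / impressions if impressions else 0.0

# The same keyword appears in three separate rows, because the
# device or the URL differs in each combination.
for query, page, device, country, imp, clk in rows:
    print(query, page, device, country, f"{ctr(clk, imp):.1f}%")
```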
The effect can be seen with brand searches, for example, where the brand ranks in first position. The main result, usually the homepage of the site, receives most of the clicks and will therefore often have a CTR exceeding 50%. The other URLs below this main URL are counted separately and each shows up in its own row within the Search Console tables.
Google does not combine these key values based on the best result, as we usually do in the Toolbox. The number of rows can therefore not be compared directly between the Search Console and the Toolbox.
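To make the difference concrete, here is a small sketch (with invented numbers) of collapsing Search Console rows to one entry per keyword, keeping only the best-placed URL, which is roughly how the Toolbox counts, and comparing the row counts:

```python
# Invented Search Console rows: (query, page, position).
rows = [
    ("brand", "/",        1.0),
    ("brand", "/about",   3.2),
    ("brand", "/contact", 4.5),
    ("shoes", "/shop",    8.1),
]

def best_result_per_keyword(rows):
    """Keep only the best-placed URL per keyword (Toolbox-style view)."""
    best = {}
    for query, page, position in rows:
        if query not in best or position < best[query][1]:
            best[query] = (page, position)
    return best

toolbox_view = best_result_per_keyword(rows)

# Search Console reports 4 rows here, while the collapsed view has
# only 2 keywords, so the row counts are not directly comparable.
print(len(rows), len(toolbox_view))  # → 4 2
```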
User behavior has an impact on the results of the evaluation
Google will only show data once a certain threshold of impressions is met. Unfortunately, Google does not disclose what this threshold is. As a result, obvious key values, like clicks, can be measured quite well within a certain period of time, while more extended evaluations rest on statistically shaky ground.
Search Console data, for example, suggests that domains rank noticeably higher on smartphones than in desktop search. Here, however, you fall victim to a systematic measurement error in Google’s data: smartphone users usually only look at the first page of Google’s results, while desktop users often click through to deeper pages. The user data for these additional result pages is therefore missing for smartphones, so Google cannot show rankings for them; only the good rankings are measured. In comparison, the Toolbox always records keywords to the same depth and will reliably monitor changes even at low ranking positions.
This systematic problem with the Search Console unfortunately does not only show up between devices: even between navigational and informational search queries there are noticeable differences in how deep users search. You should keep this in mind whenever you run an evaluation based on Search Console data, or you risk arriving at wrong conclusions based on an unsuitable data basis.
If you would like even more insights into the Search Console API data, please see our blog post “Ranking data from Google’s Search Console: Use Cases and Limits”.