All rankings gone: No more fashion news from

1. October 2015, 15:17

During September, the website took a sharp drop in its Visibility Index, a decline of more than 90% to be precise. As a result, the domain lost almost all of its keyword rankings on Google. But why?

Visibility Index of

A rapidly decreasing visibility means a loss of good keyword rankings for the domain in question. How big is the impact exactly? Let’s find out by looking at the ranking changes during the time frame in which the domain’s visibility decreased. Read Full Article

Ranking data from Google’s Search Console: Use Cases and Limits

13. September 2015, 18:52

After being talked about for quite a while, Google finally unveiled its new, substantially expanded API a couple of weeks ago. It allows access to data from Google Search Console (formerly Google Webmaster Tools). Via this interface, it is now possible for the first time to automatically access the data relating to one’s own domain. In particular, it is now possible to obtain data from the interesting area of search analytics. Over the past few weeks, we integrated this data into the Toolbox and learned a couple of things about Google’s data along the way. In this blog post, we want to share what we discovered about the data, its uses and its limits. Read Full Article
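For anyone who wants to experiment with the interface themselves, here is a minimal sketch of such a query using Google’s google-api-python-client. The property URL, date range and row limit are placeholder assumptions, and an already authorised credentials object is assumed to exist:

```python
# Minimal sketch: pull query/page rows for one property via the
# Search Console API (the "webmasters" v3 surface). Placeholder values only.
from googleapiclient.discovery import build

def fetch_search_analytics(credentials, site_url="https://www.example.com/"):
    service = build("webmasters", "v3", credentials=credentials)
    body = {
        "startDate": "2015-08-01",        # the API only exposes a limited
        "endDate": "2015-08-31",          # window of historical data
        "dimensions": ["query", "page"],  # group rows by keyword and URL
        "rowLimit": 1000,                 # rows per request are capped
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    return response.get("rows", [])
```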

Toolbox: Mobile data for all countries

21. July 2015, 18:41

From this week on, the SISTRIX Toolbox delivers data on the mobile rankings of countless search terms. As the first tool worldwide to do so, we offer smartphone data, in addition to desktop rankings and visibility data, for all supported countries: Germany, Austria, Switzerland, the Netherlands, Poland, France, Italy, Spain, the UK as well as the US.

Google recently announced that, in more than ten markets, more searches are already made via mobile devices than via the traditional desktop browser, including highly relevant search markets such as Japan and the US. We have taken this changing search behavior into account and now calculate, in parallel to the desktop rankings, the same data for the mobile Google index. Read Full Article

Google mobile update: initial findings

28. April 2015, 14:11

A week ago today, accompanied by much media interest, Google introduced the usability of websites on smartphones as a ranking factor for mobile searches. Unlike the well-known penalty updates such as Penguin and Panda, this was not an algorithm that was switched on at a set point in time and immediately took its full, negative effect. The effects of the new mobile ranking factor only reveal themselves once the Googlebot has crawled a given page and tested its mobile-friendliness.

The dust has begun to settle: Google was busy last week, and the first results of the new ranking factor are starting to come to light. Unlike with the Panda and Penguin updates, we don’t want to publish a list of winners and losers on the blog. That is because, with this update, we as the SEO industry do not first have to work out the causes: Google made the criteria by which mobile-friendly sites are evaluated completely clear and transparent from the outset. There is even a free test tool from Google to carry out this check right away and as often as you like. Instead, we want to show a few examples that are symptomatic of the many sites that have lost or gained rankings through the mobile update. Read Full Article
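As a toy illustration of how transparent the criteria are, one of the publicly documented requirements, a configured viewport, can be checked with a few lines of code. This is only a sketch of a single criterion and no substitute for Google’s test tool; the URL is a placeholder:

```python
# Toy check for one mobile-friendliness criterion: a viewport meta tag.
import re
import urllib.request

def has_viewport_meta(url: str) -> bool:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    # Mobile-friendly pages normally declare how they scale on small screens.
    return re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.I) is not None

print(has_viewport_meta("https://www.example.com/"))
```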

IndexWatch 2012: Losers

2. January 2013, 09:05

Yesterday, we looked at the winners in last year’s Google index; today, I want to show you the losers for the same period. Just as with the winners, I put together a list of 50 domains that saw a very strong percentage decrease in their Google SERP visibility. I did my best to remove domains from the list that lost or won their rankings through a domain change, unless they had an interesting story behind them. Let’s rock:

# Domain Change
1 -100%
2 -100%
3 -99%
4 -99%
5 -98%
6 -97%
7 dsl– -95%
8 -95%
9 -95%
10 -95%
11 -95%
12 -94%
13 -94%
14 -93%
15 -92%
16 -92%
17 -92%
18 -91%
19 -91%
20 -89%
21 -89%
22 -89%
23 -89%
24 -89%
25 -88%
26 -88%
27 -87%
28 -86%
29 -86%
30 -85%
31 -85%
32 -85%
33 -85%
34 -85%
35 -85%
36 -85%
37 -84%
38 -83%
39 -83%
40 -82%
41 -82%
42 -81%
43 -81%
44 -81%
45 -80%
46 -80%
47 -80%
48 -79%
49 -79%
50 -77%

The Penguin update was surely one of the biggest SEO topics in 2012. Cloaked behind the veil of a cute name, this update is another one of Google’s increased efforts to punish SEO methods that do not conform to the Google Webmaster Guidelines. If we look at the Top 50, we get the feeling that Google might actually have achieved its goal: many of the domains show a large decrease in visibility at the exact date the Penguin update was rolled out. A closer look at the domains caught by this filter reveals a common trigger: massive unnatural link building.

‘The’ update topic of 2011 still has a grip on us: Google is regularly rolling out new iterations and improvements of its Panda algorithm, which increase the filter’s accuracy. Sites that were hit by Panda once are quite likely to be hit again by one of these updates, as we can see with many of the domains on this list.

Price-comparison sites
It seems that, in 2012, Google’s relationship with price-comparison sites that are not part of Google did not fundamentally change. The Top 50 list contains four general and several specialized price-comparison sites. The most notable of them all: at the end of 2008, the domain sported a Visibility Index score of around 400 points; four years later, just about 1.17 points are left. Ever since 2009, we can see a steady decline in visibility. Since Google started officially communicating its updates (which we mark with event pins in the Toolbox), we have solid ground to stand on when making assumptions about the causes of visibility losses: for nearly each and every Panda iteration, we could see another decrease in visibility. This means that Google is continuously refining its algorithm, and sites like this one seem to fall right into the crosshairs of what Google considers unwanted pages.

You would think that moving from one domain to another should be a routine step for both website operators and Google: copy all the needed files, set up a quick URL-based 301 redirect, update the DNS entries, and all will be swell. And yet we still see roadblocks along the way that people happily run into head first. A prime example on our list is a domain that, after being bought by Rakuten, was supposed to send its German users to a new home. The move did not go as intended: both domains are now in the Google index, and even when we add up their visibility, the result is noticeably below where the original domain once stood.
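To illustrate the “quick URL-based 301 redirect” step, here is a small sketch that checks whether old URLs answer with a 301 and point to the equivalent path on the new domain. The domain names and paths are placeholders, and the requests library is assumed to be available:

```python
# Sanity-check a domain move: every old URL should return a 301
# and redirect to the same path on the new domain.
import urllib.parse
import requests

OLD, NEW = "www.old-domain.example", "www.new-domain.example"

def check_redirect(path: str) -> bool:
    resp = requests.get(f"https://{OLD}{path}", allow_redirects=False, timeout=10)
    target = urllib.parse.urlparse(resp.headers.get("Location", ""))
    return resp.status_code == 301 and target.netloc == NEW and target.path == path

for path in ["/", "/category/shoes/", "/product/12345/"]:
    print(path, "OK" if check_redirect(path) else "broken")
```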

Two domains on this list are not here of their own free will: both of them declared bankruptcy last year. Google reacted quickly and demoted the visibility of both domains.

SEO-Regulars’-Table Bonn on 04.26.2012

11. April 2012, 07:34

Now that the SEO Campixx in Berlin and the SMX Munich are over, April gives us a great opportunity to get the next SEO regulars’ table in Bonn under way. The plan is to have a cozy get-together on Thursday, April 26th 2012. As always, the regulars’ table will start at 7pm CET. Everyone interested in SEO is cordially invited to attend.

To sign up, please use this form. We will send you all the necessary information about the location a few days prior to the event. Please remember to sign up soon, as attendance is limited to 50 people.

Brave new Signal-World

29. February 2012, 08:46

We had just gotten used to the idea that SEO does not only mean the mandatory listing of all meta keywords but also consists of link building, and already the world has turned again: new signals such as user behavior and social media data now take the high seat in the public’s perception. And just in case this wasn’t enough, Google has created a smokescreen with its monthly blog posts, which regularly makes it harder to focus on what is really important. This also leads to interesting discussions in numerous blogs and networks. I want to use this post to add some points to the discussion at large.

It might sometimes be hard to remember with all the new features and verticals coming out all the time, but Google is still a full-text search engine. I don’t want to go on and on about the basics, but I believe they can be quite helpful in understanding certain relationships. Google uses a web crawler that goes through large parts of the public Internet and uses the words it finds there to fill its index. When someone comes to Google with a query, Google first looks at its index to find the pages on which the queried word is actually present. Depending on the query, this may be a list of a few million URLs. Only then, in a second step, does Google use its ominous algorithm, the one we deal with on a daily basis: it sorts the list of URLs from step one with the help of a presumably huge set of rules and processes, just to show us the first 10 results.
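To make the two steps tangible, here is a deliberately simplified model in a few lines of Python: an inverted index is filled with the words a crawler finds, candidates are looked up first, and only those candidates are then sorted by a stand-in scoring rule. The documents and the scoring are made up for illustration; Google’s real algorithm is, of course, unknown to us:

```python
# Step 0: the crawler fills the index with the words it finds.
from collections import defaultdict

docs = {
    "url-a": "red running shoes for women",
    "url-b": "running shoes sale",
    "url-c": "history of marathon running",
}

index = defaultdict(set)
for url, text in docs.items():
    for word in text.split():
        index[word].add(url)

def search(query: str, top_n: int = 10):
    words = query.split()
    # Step 1: candidate retrieval, only documents containing the query words.
    candidates = set.intersection(*(index.get(w, set()) for w in words)) if words else set()
    # Step 2: ranking, sort the candidates with some scoring rule.
    score = lambda url: sum(docs[url].split().count(w) for w in words)
    return sorted(candidates, key=score, reverse=True)[:top_n]

print(search("running shoes"))
```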

To actually make it into the algorithmic sorting, two preconditions have to be met: first, Google needs to have crawled the page and saved it in its index, and second, Google needs to classify that page as relevant for the particular search query. The first condition can usually be met with a solid page layout: use an orderly information structure and sensible internal linking to show the Google crawler the way. For the second condition, Google uses a rather simple indicator 99% of the time: the word being searched for (or a synonym) can be found on the page or within the title of the page. Only once these conditions are met do we get to the sorting and ranking of the URLs. So how do user behavior and social network signals fit into this system?

I am rather certain that Google only uses these two signals during the last step, the sorting of results. And even there we see obvious difficulties, which is likely the reason why these two factors do not carry much weight in the algorithm at the moment. Looking at user behavior, you notice that the fun only starts once you put it in relation to the actual search query: a bounce rate for one URL for one keyword, instead of a global bounce rate for the domain. If we look at the click-through rates on Google’s results pages, it quickly becomes apparent that the click-through rate takes a massive plunge once you are past the first page of results. This means that Google will not get much meaningful user data from there, and the further we go towards the long tail, the more inadequate the coverage becomes. By implication, this means that the signal could be used to decide whether to rank a page at position 3 or 4 (re-ranking), while it will clearly be unable to help with the decision of whether the page belongs in the top 10 or the top 1,000 at all.
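A sketch of what such re-ranking could look like, with entirely made-up numbers: the observed click-through rate per keyword/URL pair is compared against the rate expected for the position, and only the existing top results get reshuffled:

```python
# Baseline top results for one keyword, as produced by the main algorithm.
baseline = ["url-1", "url-2", "url-3", "url-4", "url-5"]

# Expected CTR by position (clicks drop off sharply after the top spots).
expected_ctr = [0.30, 0.15, 0.10, 0.07, 0.05]

# Observed CTR for this keyword/URL combination (fictional data).
observed_ctr = {"url-1": 0.28, "url-2": 0.09, "url-3": 0.14, "url-4": 0.07, "url-5": 0.05}

def rerank(results):
    # Score each URL by how much it over- or under-performs its position.
    def lift(indexed_url):
        position, url = indexed_url
        return observed_ctr.get(url, expected_ctr[position]) / expected_ctr[position]
    return [url for _, url in sorted(enumerate(results), key=lift, reverse=True)]

print(rerank(baseline))  # url-3 overperforms position 3 and moves up
```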

When we look at the social signals, the situation is even more deadlocked: at the moment, Google does not have a reliable source for this data. After Google’s contract with Twitter for a complete feed of all tweets was cancelled, Twitter converted its system to replace all URLs on publicly available pages with its own URL shortener and to set them to ‘nofollow’. As for the relationship between Facebook and Google, you could not call it so friendly that Facebook would home-deliver the necessary data to its competitor. The only possible source that is left is Google+. We have been gathering these signals for URLs for a while now, and it is impossible to make out a trend that Google+ is actually being used more. A new Spiegel Online article, for example, has 1,608 mentions on Facebook, 227 tweets and a whopping 3 Google+ votes. Not exactly what you would call a solid foundation for an elementary part of an algorithm that is responsible for 95% of the revenue of a publicly traded company. So how can we measure the significance a ranking signal has within Google’s algorithm? When Google starts to publicly warn people against manipulating these signals, then it is about time to start giving them some thought …

OpenLinkGraph: the SISTRIX Link-Index

19. September 2011, 08:29

It has been nearly two years since we started gathering ideas and first drafts, and now we can finally show the first fruits of our labor: the SISTRIX OpenLinkGraph private beta went live this weekend, and we have already received some valuable feedback from users. The determining factor for developing this tool was the realization that only our own index, which we crawl and process ourselves, would give us the results we expect. Add to this the fact that, since Microsoft took over Yahoo’s search, Yahoo has ceased its own crawling operations. The main trove of link data has therefore disappeared, which made developing our own index unavoidable.

What might sound simple at first glance turned out to be hugely challenging: billions of pages need to be prioritized, crawled and processed, and the database needs to return results within seconds. With the number of servers supporting such a system, some of them break down on a daily basis, which makes it necessary to buffer their failures. As one can imagine, this adds up to enough complexity to make it a lot of fun.

The result of our work is a platform that can handle current ideas and applications and is also prepared for future requirements: neither the index size nor the evaluation methods will push the system to any discernible limits, which means we will be able to enjoy it for quite a while. Since an introduction to the OpenLinkGraph would be far too long for a single blog post, I will take the next few days to preview the different parts of the system. For those of you coming to dmexco this week, come by our booth D-69 for a live preview of the tool and take home a beta invitation.

SEO-Regulars’-Table Bonn on 09.29.2011

8. September 2011, 09:24

Hello, my name is Hanns Kronenberg, and on September 1st I became part of the SISTRIX team. Up until now I did my blogging elsewhere; in the future, I will surely write some blog posts here, too.

It was extremely tempting to be able to work together with Johannes on projects like the SISTRIX Toolbox and the OpenLinkGraph, tempting enough that I happily traded in my independence as an SEO consultant to become part of a larger whole.

I am especially happy to start my work here by organizing the new SEO-Regulars’-Table in Bonn, among other things. The last one is already some months in the past, and it really is high time to continue this treasured tradition. The date we have chosen is the 29th of September 2011, and the Regulars’-Table will start, as usual, at seven o’clock in the evening.

If you want to sign up, please use this form. We will then send you more detailed information a few days before the actual event. Please sign up early enough, as there is a limit of 50 attendees.

Most linked-to URLs

5. April 2011, 12:51

Whenever I have some time on my hands, I like to dig through our Toolbox data and see what kind of connections and summaries I can come up with. At the moment, we are experimenting with an alternative backlink database that is a little more extensive than the well-known Yahoo dataset. Based on this, I wanted to figure out which URLs get linked to the most (single URLs, not whole domains). The absolute number of links pointing to a URL is not very meaningful, since footer links, for example, are a large distorting factor. Instead, I looked at the domain popularity, which is the number of different domains that link to a URL, and sorted that list in decreasing order in Excel or a comparable piece of software (a short sketch of this evaluation follows after the table). Here is the Top 50:

# URL Domain-Pop
1 1.467.293
2 646.710
3 518.762
4… 486.291
5 451.014
6 444.413
7 344.182
8… 322.019
9 284.465
10 265.504
11 245.556
12 231.476
13 192.641
14 192.446
15… 191.661
16 188.830
17 182.869
18…. 172.283
19 158.660
20 151.182
21 145.443
22 142.021
23 140.245
24 113.961
25 110.991
26 108.507
27 108.094
28 106.160
29 105.778
30 105.058
31 104.468
32… 103.147
33 102.369
34 100.993
35 98.204
36 95.990
37… 95.707
38 94.607
39 91.460
40 88.160
41 87.977
42 87.042
43 86.001
44 85.088
45 84.708
46 84.418
47 83.239
48 82.924
49 82.862
50 80.578

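A minimal version of this evaluation, assuming the backlink data can be exported as pairs of linking domain and target URL; the sample data below is made up:

```python
# Count the number of distinct linking domains per target URL
# (domain popularity) and sort in decreasing order.
from collections import defaultdict

# Placeholder export format: (linking_domain, target_url) pairs.
backlinks = [
    ("blog.example", "https://www.example.com/"),
    ("forum.example", "https://www.example.com/"),
    ("blog.example", "https://www.example.com/page"),
]

domain_pop = defaultdict(set)
for linking_domain, target_url in backlinks:
    domain_pop[target_url].add(linking_domain)

top_50 = sorted(domain_pop.items(), key=lambda kv: len(kv[1]), reverse=True)[:50]
for target_url, linking_domains in top_50:
    print(len(linking_domains), target_url)
```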
It is quite interesting that the first DE-domain only shows up at position 50. Therefore I decided to also evaluate the Top-50 URLs that are hosted on DE-domains:

Top-50 (DE-Domains)
# URL Domain-Pop
1 80.578
2… 60.486
3 57.355
4 49.901
5 49.552
6 35.589
7 31.015
8 29.361
9… 26.965
10 26.087
11 22.325
12 22.145
13… 22.005
14 21.514
15… 20.852
16 20.708
17 20.567
18 20.514
19 19.355
20… 19.279
21 18.965
22 18.063
23 17.977
24 17.211
25 16.676
26 16.660
27 16.237
28 15.765
29 15.237
30… 15.060
31 14.655
32 14.334
33 14.242
34 13.729
35 13.626
36… 13.184
37 13.157
38 12.859
39 12.841
40 12.263
41 11.997
42 11.942
43 11.697
44 11.690
45 11.195
46 10.964
47 10.867
48 10.747
49 10.715
50 10.601