Within the month of September, the website Style.com took a sharp drop in its Visibility Index: a decline of more than 90%. As a result, the domain lost almost all of its keyword rankings on Google.co.uk and Google.com. But for what reason?
Rapidly decreasing visibility means a loss of good keyword rankings for the domain in question. How big is the impact exactly? Let’s find out by looking at the ranking changes over the time frame in which the domain’s visibility decreased. Read Full Article
After having been talked about for quite a while, Google finally unveiled its new, substantially expanded API a couple of weeks ago. It allows access to data from the Google Search Console (the late Google Webmaster Tools). Via this interface, it is now possible for the first time to automatically retrieve the data relating to one’s own domain. In particular, it is now possible to obtain data from the interesting area of Search Analytics. Over the past few weeks, we integrated this data into the Toolbox and learned a couple of things about Google’s data. In this blog post, we want to share what we discovered about the data, its uses and its limits. Read Full Article
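To make the idea concrete, here is a minimal sketch of the two local halves of such an automated retrieval: building the request body that the Search Analytics query endpoint expects and aggregating the rows it returns. The field names follow the Search Console API’s `searchanalytics.query` method; the function names and the sample response are our own illustration, not part of any official client.

```python
# Sketch, not a full API client: authentication and the actual HTTP call
# via a Google API client library are omitted.

def build_query(start, end, dimensions=("query",), row_limit=1000):
    """Request body for searchanalytics.query (dates as YYYY-MM-DD)."""
    return {
        "startDate": start,
        "endDate": end,
        "dimensions": list(dimensions),
        "rowLimit": row_limit,
    }

def total_clicks(response):
    """Sum clicks over all returned rows (the 'rows' key may be absent)."""
    return sum(row.get("clicks", 0) for row in response.get("rows", []))

# A response shaped like the API's JSON answer, with made-up numbers.
sample = {"rows": [
    {"keys": ["sistrix"],  "clicks": 120, "impressions": 900},
    {"keys": ["seo tool"], "clicks": 45,  "impressions": 600},
]}
print(total_clicks(sample))  # → 165
```

The same aggregation works per keyword or per page simply by grouping on the `keys` field of each row.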
From this week on, the SISTRIX Toolbox delivers data on the mobile rankings of countless search terms. As the first tool worldwide to do so, we offer smartphone data, in addition to desktop rankings and visibility data, for all supported countries: Germany, Austria, Switzerland, the Netherlands, Poland, France, Italy, Spain, the UK and the US.
Google recently announced that, in more than ten markets, more searches are already made via mobile devices than via the traditional desktop browser – including highly relevant search markets such as Japan and the US. We have taken this changing search behavior into account and now calculate, in parallel to the desktop rankings, the same data for the mobile Google index. Read Full Article
A week ago today, accompanied by much media interest, Google introduced the usability of websites on smartphones as a ranking factor for mobile searches. Unlike the well-known penalty updates such as Penguin and Panda, it wasn’t an algorithm that could be switched on at a fixed point in time to take its full, negative effect immediately. The effects of the new mobile ranking factor only reveal themselves after the Googlebot has crawled a page and tested its mobile-friendliness.
The dust has begun to settle: Google was busy last week, and the first results of the new ranking factor are starting to come to light. Unlike with the Panda and Penguin updates, we don’t want to publish a list of winners and losers on the blog. That’s because, with this update, we in the SEO industry don’t first have to work out the causes ourselves: Google made the criteria by which mobile-friendly sites are ranked completely clear and transparent from the outset. There is even a free test tool from Google to carry out this check right away, as often as you like. Instead, we want to show a few examples that are symptomatic of the many sites that have lost or gained rankings through the mobile update. Read Full Article
Yesterday, we looked at the winners in last year’s Google index; today, I want to show you the losers for the same period. Just as with the winners, I put together a list of 50 domains that saw a very strong percentage decrease in their Google SERP visibility. I did my best to remove domains that won or lost their rankings through a domain change, unless there was an interesting story behind them. Let’s rock:
The Penguin update was surely one of the biggest SEO topics in 2012. Cloaked behind the veil of a cute name, this update is another of Google’s increased efforts to punish SEO methods that do not conform to Google’s Webmaster Guidelines. If we look at the Top 50, we get the feeling that Google might actually have achieved their goal: many of the domains show a large decrease in visibility on the exact date Penguin was rolled out. A closer look at the domains caught by this filter shows a common trigger: massive unnatural linkbuilding.
‘The’ update topic of 2011 still has a grip on us: Google regularly rolls out new iterations and improvements of their Panda algorithm, which increase the filter’s accuracy. The probability that sites which once got hit by Panda get hit again by one of the later updates is relatively high, as we can see for many of the domains on this list.
It seems that 2012 did not bring any fundamental change in Google’s relationship with price-comparison sites that are not part of Google. With Preisroboter.de, Preis.de, Preisvergleich.org and Nextag.com, we have four general and several specialized price-comparison sites in the Top 50 list. Preisroboter.de is the most notable of them all. At the end of 2008, the domain sported a Visibility Index score of around 400 points; four years later, they have just about 1.17 points left. Ever since 2009, we can see a steady decline in visibility. Since Google started officially communicating their updates (which we show you with our event pins in the Toolbox), we have decent ground to stand on when making assumptions about the causes of visibility losses: with nearly each and every Panda iteration we could see another decrease in visibility. This means that Google is continuously upgrading their algorithm, and sites like Preisroboter.de seem to fall right into the crosshairs of what it considers unwanted pages.
You would think that relocating from one domain to another should be a routine step for both website operators and Google: copy all the needed files, set up URL-by-URL 301 redirects, update the DNS entries and all will be swell. Still, we see roadblocks along the way that people happily run into head first. A prime example on our list is the domain tradoria.de: after they were bought by Rakuten, the German market was supposed to move to rakuten.de. The domain move did not go as intended: both domains are now in the Google index, and even when we sum up their visibility, we are still at a noticeable disadvantage compared to where tradoria.de once stood.
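The “URL-by-URL 301 redirect” mentioned above boils down to one simple mapping: every URL on the old domain should answer with a permanent redirect to the *same* path and query string on the new domain. A minimal sketch of that mapping, using the hostnames from the example (the function name is ours):

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_target(url, old_host="tradoria.de", new_host="rakuten.de"):
    """URL-for-URL 301 target: swap the host, keep path and query intact."""
    parts = urlsplit(url)
    host = parts.netloc.replace(old_host, new_host)  # also covers a www. prefix
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

print(redirect_target("http://www.tradoria.de/shop/item?id=42"))
# → http://www.rakuten.de/shop/item?id=42
```

Redirecting everything to the new homepage instead of preserving the path is exactly the kind of shortcut that loses the old URLs’ rankings.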
Two domains on this list are not here of their own free will: both Neckermann.de and Schlecker.com declared bankruptcy last year. Google reacted quickly and demoted the visibility of both domains.
Now that the SEO Campixx in Berlin and the SMX Munich are over, April gives us a great opportunity to get the next SEO regulars’ table in Bonn on its way. The plan is to have a cozy get-together on Thursday, April 26th, 2012. As always, the regulars’ table will start at 7pm CET. Everyone interested in SEO is cordially invited to attend.
To sign up, please use this form. We will send you all the necessary information about the location a few days prior to the event. Please remember to sign up soon, as there is an attendance limit of 50 people.
We had just gotten used to the idea that SEO does not only mean the mandatory listing of meta keywords but also consists of linkbuilding, and already the world has turned again: new signals such as user behavior and social media data now take the high seat in the public’s perception. As if this weren’t enough, Google has created a smokescreen with their monthly blog posts, which regularly makes it harder to focus on what’s really important. This has led to interesting discussions in numerous blogs and networks. I want to use this post to add some points to the discussion at large.
With all the new features and verticals coming out all the time, it can be hard to remember that Google is still a full-text search engine. I don’t want to go on and on about the basics, but I believe they can be quite helpful in understanding certain relationships. Google uses a web crawler that goes through large parts of the public Internet and uses the words it finds there to fill its index. When someone comes to Google with a query, Google first looks in its index for the pages where the queried word is actually present. Depending on the query, this may be a list of a few million URLs. Only then, in a second step, does Google apply its ominous algorithm, the one we deal with on a daily basis: it sorts the list of URLs from step one with the help of a presumably huge set of rules and processes, just to show us the first 10 results.
To actually be picked up by the algorithmic sorting, two preconditions have to be met: first, Google needs to have crawled the page and saved it in its index, and second, Google needs to classify that page as relevant for the particular search query. The first condition can usually be achieved with a solid page layout: use an orderly information structure and sensible internal linking to show the Google crawler the way. For the second condition, Google uses a rather simple indicator 99% of the time: the word being searched for (or a synonym) can be found on the page or within its title. Only once these conditions are met do we get to the sorting and ranking of URLs. So how do user behavior and social-network signals fit into this system?
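The two-step process described above can be sketched as a toy model: step one pulls every page that actually contains the query word out of an inverted index; only then does step two sort that candidate list with a scoring function. The scoring used here (term frequency with a title boost) is a deliberately simplistic stand-in for Google’s actual algorithm, and all the data is made up.

```python
from collections import defaultdict

docs = {
    "url-a": {"title": "seo basics",   "body": "seo means search engine optimization"},
    "url-b": {"title": "link building", "body": "links links links and more links"},
    "url-c": {"title": "crawling",      "body": "how the crawler fills the seo index"},
}

# Step 0: the crawler fills the inverted index (word -> set of URLs).
index = defaultdict(set)
for url, doc in docs.items():
    for word in (doc["title"] + " " + doc["body"]).split():
        index[word].add(url)

def search(query):
    # Step 1: candidates = only pages that actually contain the word.
    candidates = index.get(query, set())
    # Step 2: sort the candidates (here: term frequency, title counts double).
    def score(url):
        doc = docs[url]
        return doc["body"].split().count(query) + 2 * doc["title"].split().count(query)
    return sorted(candidates, key=score, reverse=True)

print(search("seo"))  # → ['url-a', 'url-c']  (url-b never enters the candidate set)
```

A signal like user behavior could only ever re-order the list that `search()` returns; it cannot put `url-b` into the results for “seo”, because that page never makes it past step one.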
I am rather certain that Google will only use these two signals during the last step, the sorting of results. And even there we see obvious difficulties, which is likely the reason why these two factors don’t carry much weight in the algorithm at the moment. Looking at user behavior, you notice that the fun only starts once you put it in relation to the actual search query – meaning a bounce rate for one URL for one keyword, instead of a global bounce rate for the domain. If we look at the click-through rates on Google’s results pages, it quickly becomes apparent that they take a massive plunge once you are past the first page of results. This means that Google will not be able to get much meaningful user data from there, and the further we go towards the long tail, the more inadequate the coverage becomes. By implication, this also means that this signal could be used to decide whether to rank a page at position 3 or 4 (re-ranking), while it will clearly be unable to help with the decision of whether the page belongs in the top 10 or the top 1,000 at all.
When we look at the social signals, the situation is even more deadlocked: at the moment, Google does not have a reliable source for this data. After the contract with Twitter for a complete feed of all tweets was cancelled, Twitter converted their system to replace all URLs on publicly available pages with their own URL shortener and to set them to ‘nofollow’. And the relationship between Facebook and Google is not so friendly that Facebook would home-deliver the necessary data to their competitor. The only possible source left is Google+. We have been gathering these signals for URLs for a while now, and it is impossible to make out a trend of Google+ actually being used more. A new Spiegel Online article, for example, has 1,608 mentions on Facebook, 227 tweets and a whopping 3 Google+ votes. Not exactly what you would call a solid foundation for an elementary part of an algorithm that is responsible for 95% of the revenue of a publicly traded company. So how can we measure the significance a ranking signal has in Google’s algorithm? When Google starts to publicly warn people against manipulating these signals, then it is about time to start giving them some thought …
It has been nearly two years since we started gathering ideas and first drafts, and now we can finally show the first fruits of our labor: the SISTRIX OpenLinkGraph private beta went live this weekend, and we have already received some valuable feedback from users. The determining factor for developing this tool was the realisation that only our own index, which we crawl and process ourselves, would give us the results we expect. Added to this is the fact that after Yahoo handed its search operations over to Microsoft, they decided to cease their own crawling ambitions. This means that the main trove of link data has disappeared, which made developing our own index unavoidable.
What might sound simple at first glance turned out to be hugely challenging: billions of web pages need to be prioritised, crawled and processed, and the database needs to spit out results within seconds. With the number of servers supporting such a system, some of them break down on a daily basis, which makes it necessary to buffer their impact on the system. As one can imagine, this makes for enough complexity to be a lot of fun.
The result of our work is a platform that lets us handle our current ideas and applications as well as be prepared for future requirements: neither the index size nor the evaluation methods will push the system to any discernible limits, which means we will be able to enjoy it for quite a while. Since a full introduction to the OpenLinkGraph would be far too long for one blog post, I will take the next few days to preview the different parts of the system. For those of you coming to Dmexco this week, drop by our booth D-69 for a live preview of the tool and take home a beta invitation.
Hello, my name is Hanns Kronenberg, and on September 1st I became part of the SISTRIX team. Until now, I did my blogging at seo-strategie.de; in the future, I will surely write some blog posts here, too.
The chance to work together with Johannes on projects like the SISTRIX Toolbox and the OpenLinkGraph was tempting enough that I happily traded in my independence as an SEO consultant to become part of the larger whole.
I am especially happy to start my work here by, among other things, organizing the new SEO regulars’ table in Bonn. The last one is already some months in the past, and it really is high time to continue this treasured tradition. The date we have chosen is September 29th, 2011, and the regulars’ table will start as usual at seven o’clock in the evening.
If you want to sign up, please use this form. We will then send you more detailed information a few days before the actual event. Please also sign up early enough, as there is a limit of 50 attendees.
Whenever I have some time on my hands, I like to dig through our Toolbox data and see what kinds of connections and summaries I can come up with. At the moment, we are experimenting with an alternative backlink database that is a little more extensive than the well-known Yahoo dataset. Based on this, I wanted to figure out which URLs get linked to the most (single URLs, not whole domains). The absolute number of links pointing to a URL is not very meaningful, since footer links, for example, are a large distorting factor. Instead, I looked at domain popularity – the number of different domains that link to a URL – and sorted that list in decreasing order in Excel or a comparable program. Here we have the Top 50:
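The domain-popularity count described above can be sketched in a few lines: for each target URL, count the number of *distinct* linking domains rather than raw link totals, so that a sitewide footer link counts only once. The (source page, target URL) pairs below are toy data standing in for the real backlink database.

```python
from collections import defaultdict
from urllib.parse import urlsplit

# Toy crawl output: (source page, target URL) pairs.
links = [
    ("http://blog-a.de/post1", "http://example.com/"),
    ("http://blog-a.de/post2", "http://example.com/"),   # same domain: no extra credit
    ("http://news-b.com/x",    "http://example.com/"),
    ("http://news-b.com/y",    "http://other.org/page"),
]

# Collect the set of linking domains per target URL ...
linking_domains = defaultdict(set)
for source, target in links:
    linking_domains[target].add(urlsplit(source).netloc)

# ... then the set sizes are the domain popularity, sorted in decreasing order.
domain_pop = {url: len(domains) for url, domains in linking_domains.items()}
top = sorted(domain_pop.items(), key=lambda kv: kv[1], reverse=True)
print(top)  # → [('http://example.com/', 2), ('http://other.org/page', 1)]
```

Using a set per target URL is what deduplicates the two links from blog-a.de into a single “vote”.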
It is quite interesting that the first .de domain only shows up at position 50. So I decided to also evaluate the Top 50 URLs hosted on .de domains: