The mobile/desktop scale is tipping towards mobile in more and more Google search markets, and the number of users who do their shopping on mobile devices is also steadily increasing. No wonder Google is irked by the fact that it does not have as tightly controlled an ecosystem as Facebook or Amazon. To change this, Google invented AMP, which it markets as a way to make mobile loading times faster for users. AMP pages are trimmed-down HTML pages which are delivered directly from Google’s servers, and they allow Google to simulate something akin to its own ecosystem. Such pages look like this in the mobile SERPs:
In August 2015, Parsely.com – a content analytics platform – released an Authority Report in which they suggested that Facebook had surpassed Google as a top referring source for publishers. But is this true? Many European publishers would disagree. Either way, one and a half years and a number of report updates later, it is quite surprising that, in many American and European institutions and sectors (universities, website owners, publishers, etc.), this statement has been accepted as the only truth – which it is not.
No wonder, if you look at how this statement has been received: Nytimes.com writes: “Facebook eclipsed Google for the share of referral traffic to publishers“. Fortune.com begins its article with “Facebook is no longer just vying with Google but has overtaken it by a significant amount.“ And even Marketing Land writes “Facebook has landed the latest punch in the heavyweight battle with Google for referral traffic supremacy“.
Today, I would like to take a closer look at this study and offer additional numbers.
A few weeks ago, I explained how a directory structure that uses dates within the URLs can kill your content in Google. The examples of TheGuardian.com, HuffingtonPost.co.uk and TechCrunch.com all show us that there is still much room for improvement and an opportunity for more traffic. Dates in URLs are a symptom of a less than optimal information architecture.
Publishers and news websites have to prepare the pieces of information on their sites in such a way that both Google and regular users can find them. Google users do not think in dates (they are not using news libraries!); they use search queries on search engines, and that is how things work today. Something that e-commerce websites take for granted should also be self-evident to news websites.
Could you imagine an e-commerce website using dates as part of its information architecture? No, of course not. Shops, for example, are made up of detail pages and categories, with the categories being an integral part of the site, because users often do not know exactly what they are looking for.
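To make the difference concrete, here is a minimal sketch in Python of the two URL schemes. The helper names and the example article are invented for illustration; none of the sites mentioned above necessarily build their URLs this way.

```python
import re

def slugify(text: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def date_based_url(date: str, title: str) -> str:
    # Date-based scheme: the query-relevant words come last.
    # e.g. /2017/02/15/google-amp-rollout
    return f"/{date.replace('-', '/')}/{slugify(title)}"

def category_based_url(category: str, title: str) -> str:
    # Category-based scheme: every path segment carries meaning.
    # e.g. /technology/google-amp-rollout
    return f"/{slugify(category)}/{slugify(title)}"

print(date_based_url("2017-02-15", "Google AMP Rollout"))
print(category_based_url("Technology", "Google AMP Rollout"))
```

The category-based variant carries the information a searcher actually queries for, while the date-based one buries it behind numbers that nobody searches.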
2016 has, once again, been an incredibly thrilling and eventful year for us. Some of our highlights are the introduction of the daily Visibility Index for Germany – with a daily update of the ranking data for all domains – the development of our Amazon SEO Tools, the acquisition of SEOlytics, the integration of the Majestic link data as an additional link source within the Toolbox and Google’s “Mobile First” announcement.
We are now on the last leg of 2016 and Christmas markets everywhere are doing their part in enriching our air with the smells of roasted almonds, warm waffles and mulled wine. This means that our annual SISTRIX Christmas charity should not be missing.
In accordance with tradition, we again set up our Christmas Ferris wheel. With every click you start our Christmas wheel and we donate 1 Euro for a great cause.
This year our donations will go to Transparency International, a non-governmental organization that has made it its mission to fight corruption all over the world, in order to make it a more just place. It shines a light on loopholes in laws, institutions and systems and instigates reforms.
SISTRIX shares the values of transparency, fairness and factual arguments with Transparency International. Help us reach our donation goal of 5,000 Euro by starting our Ferris wheel with your click.
We wish everyone on earth a merry Christmas!
Google just took off the gloves: yesterday was the day my Chrome browser started showing not only the green encryption lock for HTTPS pages, but also explicitly telling me that the page is “Secure”. Google has already made us aware in a blog post that, in the future, they will start showing a “Not secure” label for non-encrypted pages.
While this advance is definitely the right idea, I would still like to put it through a reality check. For this, I analyzed our data on how the share of pages transferred via HTTPS within the Google SERPs has changed over the last few years, for the United Kingdom, Germany and Spain. Here we see the data for the UK:
We can clearly see how the share of encrypted pages started off slowly but is now rising at an ever-increasing rate. The blue line shows the percentage of HTTPS pages within the Top-100 search results and the green line shows the same percentage for the Top-10 search results. The values went from less than 10 percent in mid-2015 to, today, more than 30 percent of all pages within the Top-100 and close to 40 percent within the Top-10.
If you are wondering, the huge jump in the Top-10 in mid-2015 is proudly presented to you by Wikipedia.org – a wonderful example of the abiding theme of correlation and causation. If we run the same evaluation without Wikipedia.org, we see a smoother graph, but both direction and intensity are still very pronounced:
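At its core, an evaluation like this boils down to a very small calculation. The sketch below assumes a plain list of ranking URLs; the sample data is invented and `https_share` is a hypothetical helper, not part of any SISTRIX tool.

```python
from urllib.parse import urlparse

def https_share(urls, exclude_host=None):
    """Percentage of HTTPS URLs, optionally ignoring one host
    (mirroring the Wikipedia.org exclusion described above)."""
    kept = [u for u in urls
            if exclude_host is None
            or not urlparse(u).hostname.endswith(exclude_host)]
    if not kept:
        return 0.0
    secure = sum(1 for u in kept if urlparse(u).scheme == "https")
    return 100.0 * secure / len(kept)

sample = [
    "https://en.wikipedia.org/wiki/HTTPS",
    "https://example.com/a",
    "http://example.org/b",
    "http://example.net/c",
]
print(https_share(sample))                               # 50.0
print(https_share(sample, exclude_host="wikipedia.org"))
```

Excluding a single dominant host, as in the second call, can noticeably shift the percentage – which is exactly why the Wikipedia-free graph is worth looking at.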
An ancient Indian minister, Sissa ibn Dahir (Sessa), invented the board game of chess in order to direct the attention of his ruler, Shihram, to the problems in his country. The ruler expressed his enthusiasm for the game and Sessa was allowed to decide how he wanted to be compensated. Sessa asked only for rice, but in the following distribution: 1 grain of rice on the first square of the board, 2 grains on the second square, 4 grains on the third, 8 on the fourth, and so on up to square number 64. The ruler laughed it off as a small prize for a brilliant invention. What the ruler did not consider was that, in the end, this would add up to more rice than exists world-wide (even today): 18,446,744,073,709,551,615 grains.
This exercise can be used to demonstrate how quickly exponential sequences grow. Something very similar happens if you use filter parameters within the URLs on your domains. If you have 1 product in 10 different colours, in 10 different sizes and with 10 different prices, you can suddenly have 1,000 new URLs, for the same product, with no additional value. If you allow Google to index URLs with low quality content, they will negatively affect your rankings for the entire domain.
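The arithmetic behind both examples can be checked in a couple of lines of Python:

```python
# Grains on all 64 squares: 1 + 2 + 4 + ... + 2**63 = 2**64 - 1
grains = sum(2**square for square in range(64))
print(grains)  # 18446744073709551615

# 1 product, 10 colours x 10 sizes x 10 prices = 1,000 URL variants
variants = 10 * 10 * 10
print(variants)  # 1000
```

Each additional filter parameter multiplies the URL count again, which is why these parameter combinations get out of hand so quickly.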
Let’s look at an example.
The domain Spotify.com has massively lost visibility worldwide on Google. The country with the largest loss is France (-82.1%) and the country with the smallest impact is the Netherlands (-44.5%). In the United Kingdom, Spotify lost 48% of all their keywords (from 92,588 to 59,102) and 71% of their Visibility on Google.co.uk. Read Full Article
About two weeks ago Google officially announced the rollout of Penguin 4.0 and that its effects could now be seen in the Google results. Now that the dust has settled and the first results can be found in the SERPs, I would like to summarize our present knowledge in this blog post. First of all, two points from Google’s blog post which can be regarded as reasonably certain:
Penguin 4.0 is here to stay
Up until now, Penguin’s mode of operation was as simple as it was annoying: Google assembled all the data needed and ran the Penguin filter algorithm at irregular intervals. The results showed up in one go and the pages concerned received their Google ranking penalty. Google has changed this procedure with Penguin 4.0: the impact is now visible as soon as Google has crawled a URL. It is still unclear whether this means your own site or whether the Googlebot needs to re-crawl the linking pages as well.
The effect of Penguin is now more targeted
Fine-grained evaluation was never the Penguin filter’s strong suit: either a project was impacted in its entirety or not at all. Google now claims that, with Penguin 4.0, the effect of the filter will be targeted more precisely: at particular URLs, maybe even particular rankings. Read Full Article
Last week, at Pubcon in Las Vegas, Google announced that the Mobile Index will take the place of the Desktop Index as the main index. This means that from now on Google will not check whether a Mobile Version exists for the Desktop Version but, conversely, whether a Desktop Version exists for the Mobile Version. The Mobile Index, and Mobile SEO with it, will thus increase massively in importance.
Even if most of the revenue is made on the Desktop Version of a website, Mobile SEO will become crucial since henceforward it will directly influence the crawling of a Desktop Version and therefore influence the Desktop rankings. Read Full Article
The main reason to avoid dates within your directory structure is explained on page 8 of Google’s Search Engine Optimization Starter Guide:
Simple-to-understand URLs will convey content information easily
Creating descriptive categories and filenames for the documents on your website can not only help you keep your site better organized, but it could also lead to better crawling of your documents by search engines. Also, it can create easier, “friendlier” URLs for those that want to link to your content. Visitors may be intimidated by extremely long and cryptic URLs that contain few recognizable words.
If the dates constitute a really relevant piece of information for the user, I would keep them. In all other cases I would advise against them, as in using them you are likely going to kill your content on Google. Read Full Article