What are Featured Snippets?

Over the years, Google has been adding more and more information to the search results, aside from the AdWords ads and organic results, in order to enhance the search experience. Featured Snippets are a format intended to provide users with a concise, direct answer to their questions […]

Shelf space optimisation on Google

What do a search result page on Google and a supermarket shelf have in common? They serve a specific purpose, and they only have limited space available (a constraint). In both cases, the constraint is the limiting factor for the performance of the entire system. The supermarket might […]

How can I quickly get a new page into Google’s index?

Google is already pretty quick when it comes to finding and indexing new pages (URLs). This process may be a little quicker or take a little longer depending on the ‘popularity’ of the website in question. Any webmaster using the Google Search Console (GSC) can expedite this and therefore manually […]

Domain move risk and Google rankings

Theoretically, changing your domain name does not have to cause any damage to your rankings on Google – that is, if you strictly adhere to Google’s instructions. In the real world, we can find examples which make us suspect that a domain move may actually carry a serious SEO risk. […]

Should the domain’s homepage rank first for a site:-query?

The site:-query displays the number of indexed URLs that a domain has within Google’s index. The question of whether the order in which these results are shown has any importance is a regular topic for discussion. The last time we heard from Google about this topic was more than […]

The consequences of negative user-signals on Google’s rankings

The homepage of the German domain offers a great example of how much influence user signals can have on Google’s rankings for a specific keyword. The domain is a long-established exact-match domain for the keyword “hotel bonn”, and the two words “hotel” and “Bonn” (the former German capital) also […]

Crawling and Indexing for extensive websites

As soon as websites exceed the size of a typical private homepage, a number of new challenges arise. One of them is getting the existing content into the Google index as completely and up to date as possible. While this may sound easy, very large websites […]

What is the difference between SEO, Ads and Universal Search?

To better explain the differences between the three categories SEO, Ads and Universal Search, it is best to take a look at one of Google’s search result pages. Overview of a Google search engine result page (SERP): the typical search result page, also called search engine result page or “SERP” for short, […]

How important is a sitemap for the indexing of my site?

A sitemap is a list of all the pages (URLs) on a website. This file is used by search engines as an overview of all available content as well as a way to (better) understand the structure of the website. If you create an XML-sitemap for your website, you can […]
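As a minimal sketch, an XML-sitemap for a small site could look like this (the domain and dates are placeholders, not taken from the article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; <lastmod> is optional -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-05-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about.html</loc>
  </url>
</urlset>
```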

If I have an XML-sitemap, do I also have to provide an HTML-sitemap?

It is indeed recommended to provide both an XML-sitemap and an HTML-sitemap. The XML-sitemap serves as a structured table-of-contents for search engines and helps them find new and deeply nested pages. The HTML-sitemap, on the other hand, mainly serves users, as it increases the usability of your website. But thanks […]

What do I have to keep in mind when creating a video-sitemap?

A video-sitemap is the basis for providing Google with the necessary information about the content of your videos. The video sitemap is an XML-file and contains your videos’ meta-data, like title, description, length, and source. Using a video-sitemap ensures that your video content can be recognised and indexed much faster […]
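A minimal video-sitemap entry, following Google’s video-sitemap XML schema, might look like this sketch (all URLs and values are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>https://www.example.com/videos/intro</loc>
    <video:video>
      <!-- Meta-data about the video: thumbnail, title, description, source, length -->
      <video:thumbnail_loc>https://www.example.com/thumbs/intro.jpg</video:thumbnail_loc>
      <video:title>Introduction</video:title>
      <video:description>A short introduction video.</video:description>
      <video:content_loc>https://www.example.com/videos/intro.mp4</video:content_loc>
      <video:duration>120</video:duration> <!-- length in seconds -->
    </video:video>
  </url>
</urlset>
```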

How do I safely move my website to a new domain name?

Changing the domain name does not have to cause any (long-lasting) negative SEO effects. As long as you plan the move carefully and execute it correctly, you do not have to fear ranking losses. In order to move your domain, you should tell Google that you […]
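On an Apache server with mod_rewrite enabled, one common way to signal such a move is a sitewide 301-redirect from the old domain to the same path on the new one, for example in the .htaccess file (both domain names here are placeholders):

```apacheconf
RewriteEngine On
# Redirect every URL on the old domain to the same path on the new domain
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
RewriteRule ^(.*)$ https://www.new-domain.example/$1 [R=301,L]
```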

Which meta-elements (meta-tags) are relevant for SEO and which are not?

Meta-elements are SEO-relevant if a search-engine crawler reads and processes them and takes them into consideration when calculating the search result rankings, as well as for indexing purposes.

Setting up a 301-redirect from the non-www to the www. domain-name

You should use a 301-redirect to indicate the preferred domain name, in order to avoid problems with how the Google-Bot indexes your website and to make sure no internal Duplicate Content arises due to canonicalization issues. Please also see: My website can be reached with and without the www. Is this […]
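On an Apache server, a sketch of such a 301-redirect in the .htaccess file could look like this (example.com stands in for your own domain):

```apacheconf
RewriteEngine On
# Permanently redirect the non-www hostname to the www hostname
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```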

When does it make sense to use the meta-robots values NOINDEX and FOLLOW together?

Only search engine crawlers will interpret the values within the meta-element “robots”. In most cases, the values “INDEX” and “FOLLOW” are used to instruct the crawler to include the present page in their index and follow all links on the page. Your page may be added to the index and […]
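The NOINDEX/FOLLOW combination is set in the page’s head section; as a minimal sketch:

```html
<!-- Keep this page out of the index, but still follow its links -->
<meta name="robots" content="noindex,follow">
```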

Is Duplicate Content responsible for the strong fluctuations in the indexed pages of my website?

If you notice continuous fluctuations in the number of indexed pages over a longer period of time, then the cause, or at least a symptom, could very well be Duplicate Content. In order to evaluate these fluctuations and figure out whether there is a Duplicate Content problem, you will […]

Is it possible to identify Duplicate Content through the Visibility Index history?

Yes, a potential Duplicate Content problem can have a visible impact on the SISTRIX Visibility Index, as it can negatively affect a large part of the domain’s rankings. It is quite possible that the SISTRIX Visibility Index will show the same ups and downs which can be […]

Using and correctly implementing Content-Syndication

Definition of what Content-Syndication actually means: Content-Syndication is using specific (media) content multiple times. This can be articles, interviews, blogposts, studies or any other kind of text, as well as infographics, videos, podcasts, etc. Anyone offering Content-Syndication gives their content, of which they are […]

How long does a Google Penalty last?

As there are different types of Google Penalties, there is no one-size-fits-all answer to this question. We can, however, generally differentiate between two types of penalties. The algorithmic penalty: a website may be penalised by an algorithmic filter, such as the Google Panda Update or the Google Penguin […]

What kinds of Google Penalties are there and what are the differences?

A Penalty is a measure by Google to punish websites that are in violation of their Webmaster Guidelines. A penalty can affect a website with different levels of severity. You can differentiate between punishments on a keyword-, URL- or directory-level as well as a sitewide (affecting […]

What is a Reconsideration Request?

A Reconsideration Request is a request for another manual review of a website that has been punished by a manual penalty. If a domain is affected by an algorithmic penalty, sending a Reconsideration Request will have no effect. Depending on the scale and severity of the violation against the Google […]

What do I have to pay attention to in a Reconsideration Request?

If you want to submit a Reconsideration Request because of a manual penalty by Google, you should take a number of things into consideration. When submitting a request for another website review, you should be able to put forward a detailed report of what you did. This report should contain […]

Google Penalties

A penalty is the sanctioning of a website by Google. A website is punished for non-compliance with the Google Webmaster Guidelines. Google penalties come in two flavours: manual as well as algorithmic penalties. What kinds of Google Penalty are there and what are the differences? […]

Why am I getting different values for indexed pages in the Google search, the GSC and SISTRIX?

Sometimes it may happen that the numbers you get from a Google site:-query, the Google Search Console (GSC) and the SISTRIX Toolbox do not match. You are not able to directly compare the numbers you get from a site:-query on Google and the Google Search Console, as the latter are […]

How can I remove a URL on my website from the Google Index?

To remove a specific URL of your own website from Google’s index, there are two options available. Option #1: Use the robots meta-element with NOINDEX. Add the meta-element robots to the source code of the page which is not supposed to appear in the index […]
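For Option #1, the robots meta-element is placed in the head section of the page in question; as a sketch:

```html
<head>
  <!-- Ask search engines not to list this page in their index -->
  <meta name="robots" content="noindex">
</head>
```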

How can I keep the Google-Bot from crawling my website?

Whatever your reasons may be for blocking Google from crawling all or parts of your domain, you can do so within the so-called robots.txt file. Blocking the Google-Bot using the robots.txt: the robots.txt is a simple text file named “robots.txt”. It has to be placed in the root-directory […]
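To block only the Google-Bot from the entire domain, the robots.txt could contain just these lines:

```
# Block only Google's crawler from the whole site
User-agent: Googlebot
Disallow: /
```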

How can I find out how many pages of my domain are indexed by Google?

Google provides two simple options to determine the number of indexed pages of your domain. The total number of indexed pages may vary greatly from the total number of pages you actually have live on a domain. Option #1: the Google site:-query. By using a simple search query with the […]

Why does the amount of indexed pages fluctuate so much?

The current trend of indexed pages in the SISTRIX Toolbox shows a noticeable up and down movement with a large degree of variation. The number of indexed pages in the SISTRIX Toolbox: within the SISTRIX Toolbox, we monitor the number of indexed pages daily, but will only create a new […]

Google-Index, Google-Bot & Crawler

A website can only be found in Google Search once it has been incorporated into Google’s index. To make sure that (almost) all websites available on the web can be found through Google Search, the Google-Bot crawls (searches through) billions of websites on a daily basis in […]

Can the Google-Bot fill out and crawl forms?

The Google-Bot will generally try to fill out and post forms on a page, in order to discover new content and URLs that are not directly viewable otherwise. Google will decide on an individual basis if a FORM-Element on a page is considered to be useful and then try to […]

Why does a Google search with the quotation mark operator sometimes deliver more results than the same search without it?

A Google search can be made with different search operators. The quotation mark-operator: [ “keyword” ] can be used to search or filter for a specific word or sentence. In this case we talk about an “exact match.” When you put a word or phrase in quotes, the results will […]

Why does a URL that is blocked through robots.txt show up in the search results?

If you use the robots.txt to block access to a directory or a specific page for search engine crawlers, this page/directory will not be crawled. You can block the directory “a-directory” and the page “a-page.html” for webcrawlers with the following addition to the site’s robots.txt:

User-agent: *
Disallow: /a-directory/
[…]

My website can be reached with and without the www. Is this harmful?

To minimise Duplicate Content problems and to ensure better indexing by the Google-Bot, Google recommends using a preferred domain name. That means you have to decide which hostname should be preferred for your domain: without the www., with the www. hostname, or even a totally different […]

How often does Google carry out Algorithm Updates?

Google constantly tries to maintain and increase the quality of their search index. In order to accomplish this goal, Google launches algorithm updates every now and then, on all search markets, worldwide. These so-called Google Updates can have a varying degree of influence on the particular ranking of a website. […]

Can PDF-files of my HTML-pages lead to a Duplicate Content problem

From a technical standpoint, it would be a case of internal Duplicate Content if the same content can be accessed through both an HTML-file and a PDF-document on your website. It would be external Duplicate Content if, for example, you offered a downloadable PDF version of the user-manual […]

Duplicate Content

Duplicate Content means that content is accessible through multiple URLs. This so-called Duplicate Content should be categorically avoided. Each piece of content on a website must only be accessible through one single URL. Otherwise, Google is put on the spot and has to decide which URL to display in the […]