Ever wondered what would happen if you prevented Google from crawling your website for a few weeks? Technical SEO expert Kristina Azarenko has published the results of such an experiment.
Six surprising things that happened. What happened when Googlebot couldn't crawl Azarenko's site from Oct. 5 to Nov. 7:
- The favicon was removed from Google Search results.
- Video search results took a huge hit and still haven't recovered post-experiment.
- Positions remained relatively stable, though they were slightly more volatile in Canada.
- Traffic saw only a slight decrease.
- An increase in reported indexed pages in Google Search Console. Why? Pages with noindex meta robots tags ended up being indexed because Google couldn't crawl the site to see those tags.
- Multiple alerts in GSC (e.g., "Indexed, though blocked by robots.txt" and "Blocked by robots.txt").
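The indexing surprise above follows from robots.txt and meta robots working at different stages: robots.txt stops crawling, while a noindex tag only takes effect if Google can fetch the page and read it. A minimal sketch of the conflicting setup (hypothetical site, for illustration only):

```
# robots.txt — tells Googlebot not to fetch any page
User-agent: Googlebot
Disallow: /

<!-- On a page of the same site: this noindex directive is never
     seen by Google, because the page can no longer be crawled -->
<meta name="robots" content="noindex">
```

With crawling blocked, Google can still index the URL from external signals such as links, which is exactly what the "Indexed, though blocked by robots.txt" alert describes.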
Why we care. Testing is an essential element of SEO. Any change (intentional or unintentional) can impact your rankings, traffic and bottom line, so it's good to understand how Google might react. Also, most companies aren't able to run this kind of experiment, so this is good information to know.
The experiment. You can read all about it in Unexpected Results of My Google Crawling Experiment.
Another related experiment. Patrick Stox of Ahrefs has also shared the results of blocking two high-ranking pages with robots.txt for five months. The impact on rankings was minimal, but the pages lost all their featured snippets.