getapp.de
robots.txt

Robots Exclusion Standard data for getapp.de

Resource Scan

Scan Details

Site Domain getapp.de
Base Domain getapp.de
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-02-11T20:22:25+00:00
Next Scan 2025-05-12T20:22:25+00:00

Last Successful Scan

Scanned 2023-12-27T20:19:59+00:00
URL https://getapp.de/robots.txt
Redirect https://www.getapp.de/robots.txt
Redirect Domain www.getapp.de
Redirect Base getapp.de
Domain IPs 104.21.65.247, 172.67.195.207, 2606:4700:3031::6815:41f7, 2606:4700:3035::ac43:c3cf
Redirect IPs 104.21.65.247, 172.67.195.207, 2606:4700:3031::6815:41f7, 2606:4700:3035::ac43:c3cf
Response IP 172.67.195.207
Found Yes
Hash cb283f670e1c1cda7a30f3f09251d12f39e2ff8f99552bf975b265dfbe38480e
SimHash a2b591a3ee61

Groups

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10
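Assembled from the fields above (a group with no Allow/Disallow rules and a crawl-delay record), the msnbot section of the raw robots.txt would read approximately as follows; the exact original spelling and ordering are assumptions, since only the parsed fields are shown here:

```text
# msnbot group: no path rules, so all paths are allowed,
# but the crawler is asked to wait 10 seconds between requests.
User-agent: msnbot
Crawl-delay: 10
```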

ahrefsbot

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

bubing

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

psbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

speedy

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

bloglines/3.1

Rule Path
Disallow /

jyxobot/1

Rule Path
Disallow /

cityreview

Rule Path
Disallow /

crazywebcrawler-spider

Rule Path
Disallow /

domain re-animator bot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

semrushbot-sa

Rule Path
Disallow /

vegi

Rule Path
Disallow /

rogerbot

Rule Path
Disallow /

mauibot

Rule Path
Disallow /

linguee

Rule Path
Disallow /

petalbot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

yandex

Rule Path
Disallow /

yandexbot

Rule Path
Disallow /

seekportbot

Rule Path
Disallow /

*

Rule Path
Allow /*?vsn=d$
Allow /sitemap/*?page=
Allow /directory/*?page=
Allow /blog?page=
Disallow /*?*
Disallow /cdn-cgi/

Comments

  • Blocks crawlers that are kind enough to obey robots
  • allow digested assets
  • allow paginated sitemaps
  • allow paginated category pages
  • allow paginated blog homepage
  • pages with query strings