capterra.co.uk
robots.txt

Robots Exclusion Standard data for capterra.co.uk

Resource Scan

Scan Details

Site Domain capterra.co.uk
Base Domain capterra.co.uk
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-05-14T19:21:31+00:00
Next Scan 2024-08-12T19:21:31+00:00

Last Successful Scan

Scanned 2023-12-24T19:19:28+00:00
URL https://capterra.co.uk/robots.txt
Redirect https://www.capterra.co.uk/robots.txt
Redirect Domain www.capterra.co.uk
Redirect Base capterra.co.uk
Domain IPs 104.21.24.107, 172.67.218.85, 2606:4700:3031::6815:186b, 2606:4700:3036::ac43:da55
Redirect IPs 104.21.24.107, 172.67.218.85, 2606:4700:3031::6815:186b, 2606:4700:3036::ac43:da55
Response IP 104.21.24.107
Found Yes
Hash cb283f670e1c1cda7a30f3f09251d12f39e2ff8f99552bf975b265dfbe38480e
SimHash a2b591a3ee61

Groups

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10
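
The msnbot group carries no path rules, only a crawl-delay record, so in the raw file it presumably consists of a bare User-agent line plus a Crawl-delay directive. A minimal sketch with Python's urllib.robotparser, assuming that reconstruction, shows how a crawler would read the delay and confirm that all paths remain fetchable for this agent:

from urllib.robotparser import RobotFileParser

# Assumed raw form of the msnbot group, reconstructed from the scan data above.
parser = RobotFileParser()
parser.parse([
    "User-agent: msnbot",
    "Crawl-delay: 10",
])

print(parser.crawl_delay("msnbot"))         # 10 (seconds between requests)
print(parser.can_fetch("msnbot", "/blog"))  # True: no Disallow rules for this agent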

ahrefsbot

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

bubing

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

psbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

speedy

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

bloglines/3.1

Rule Path
Disallow /

jyxobot/1

Rule Path
Disallow /

cityreview

Rule Path
Disallow /

crazywebcrawler-spider

Rule Path
Disallow /

domain re-animator bot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

semrushbot-sa

Rule Path
Disallow /

vegi

Rule Path
Disallow /

rogerbot

Rule Path
Disallow /

mauibot

Rule Path
Disallow /

linguee

Rule Path
Disallow /

petalbot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

yandex

Rule Path
Disallow /

yandexbot

Rule Path
Disallow /

seekportbot

Rule Path
Disallow /
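
Every named group from ahrefsbot through seekportbot carries this same single rule, which in the raw file presumably takes the form of a User-agent line followed by "Disallow: /". A minimal sketch with Python's urllib.robotparser, assuming that reconstruction for one of the agents (AhrefsBot is used here; the sample path is purely illustrative), shows the blanket block in effect:

from urllib.robotparser import RobotFileParser

# Assumed raw form of one blanket-disallow group, reconstructed from the scan above.
parser = RobotFileParser()
parser.parse([
    "User-agent: AhrefsBot",
    "Disallow: /",
])

# Agent matching is case-insensitive, and "Disallow: /" covers every path.
print(parser.can_fetch("AhrefsBot", "/directory/crm-software/"))     # False
print(parser.can_fetch("SomeOtherBot", "/directory/crm-software/"))  # True: no group matches this agent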

*

Rule Path
Allow /*?vsn=d$
Allow /sitemap/*?page=
Allow /directory/*?page=
Allow /blog?page=
Disallow /*?*
Disallow /cdn-cgi/

Comments

  • Blocks crawlers that are kind enough to obey robots
  • allow digested assets
  • allow paginated sitemaps
  • allow paginated category pages
  • allow paginated blog homepage
  • pages with query strings
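
The catch-all group pairs wildcard Allow rules for digested assets and pagination with a broad Disallow on query strings. Python's standard-library robotparser does not interpret the * and $ wildcards used here, so the sketch below hand-rolls the RFC 9309 longest-match rule to spot-check a few paths against exactly these patterns; the sample URLs are illustrative, not taken from the site:

import re

# The catch-all rules exactly as listed above.
RULES = [
    ("allow",    "/*?vsn=d$"),
    ("allow",    "/sitemap/*?page="),
    ("allow",    "/directory/*?page="),
    ("allow",    "/blog?page="),
    ("disallow", "/*?*"),
    ("disallow", "/cdn-cgi/"),
]

def pattern_to_regex(pattern):
    # '*' matches any run of characters; a trailing '$' anchors the end of the path.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile(regex + ("$" if anchored else ""))

def is_allowed(path):
    # RFC 9309: the longest matching pattern wins; on a tie, Allow beats Disallow.
    best = None
    for rule, pattern in RULES:
        if pattern_to_regex(pattern).match(path):
            candidate = (len(pattern), rule == "allow")
            if best is None or candidate > best:
                best = candidate
    return True if best is None else best[1]

print(is_allowed("/blog?page=2"))           # True  - paginated blog homepage
print(is_allowed("/directory/crm?page=3"))  # True  - paginated category page
print(is_allowed("/search?q=crm"))          # False - query string caught by /*?*
print(is_allowed("/cdn-cgi/trace"))         # False - /cdn-cgi/ is blocked outright
print(is_allowed("/blog"))                  # True  - no rule matches, default allow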