capterra.jp
robots.txt

Robots Exclusion Standard data for capterra.jp

Resource Scan

Scan Details

Site Domain capterra.jp
Base Domain capterra.jp
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-09-01T19:11:57+00:00
Next Scan 2024-11-30T19:11:57+00:00

Last Successful Scan

Scanned 2024-01-13T15:45:20+00:00
URL https://capterra.jp/robots.txt
Redirect https://www.capterra.jp/robots.txt
Redirect Domain www.capterra.jp
Redirect Base capterra.jp
Domain IPs 172.66.41.7, 172.66.42.249, 2606:4700:3108::ac42:2907, 2606:4700:3108::ac42:2af9
Redirect IPs 172.66.41.7, 172.66.42.249, 2606:4700:3108::ac42:2907, 2606:4700:3108::ac42:2af9
Response IP 172.66.42.249
Found Yes
Hash cb283f670e1c1cda7a30f3f09251d12f39e2ff8f99552bf975b265dfbe38480e
SimHash a2b591a3ee61
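
The last successful scan followed a single redirect from capterra.jp to www.capterra.jp and recorded a content hash. Below is a minimal Python sketch of reproducing that fetch and hash with the standard library; it assumes the Hash field is a SHA-256 digest of the raw response body, and since the most recent scan failed with a client error, the request may raise an HTTPError today.

    import hashlib
    import urllib.request

    # URL from the scan record; urllib follows the redirect to
    # https://www.capterra.jp/robots.txt automatically.
    URL = "https://capterra.jp/robots.txt"

    req = urllib.request.Request(URL, headers={"User-Agent": "robots-scan-sketch/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()
        print("Final URL:", resp.geturl())   # URL after any redirects

    # Assumption: the report's Hash field is SHA-256 over the raw body bytes.
    print("SHA-256:", hashlib.sha256(body).hexdigest())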

Groups

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10
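
The msnbot group defines no path rules and only a Crawl-delay of 10 seconds. A polite crawler can read and honor that value with Python's urllib.robotparser; the sketch below parses a minimal excerpt of the group rather than the live file.

    import time
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    # Minimal excerpt mirroring the msnbot group recorded above.
    rp.parse([
        "User-agent: msnbot",
        "Crawl-delay: 10",
    ])

    delay = rp.crawl_delay("msnbot") or 0   # -> 10
    for path in ("/", "/directory/", "/blog"):
        print("would fetch", path)
        time.sleep(delay)                   # pause between requests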

ahrefsbot

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

bubing

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

psbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

speedy

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

bloglines/3.1

Rule Path
Disallow /

jyxobot/1

Rule Path
Disallow /

cityreview

Rule Path
Disallow /

crazywebcrawler-spider

Rule Path
Disallow /

domain re-animator bot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

semrushbot-sa

Rule Path
Disallow /

vegi

Rule Path
Disallow /

rogerbot

Rule Path
Disallow /

mauibot

Rule Path
Disallow /

linguee

Rule Path
Disallow /

petalbot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

yandex

Rule Path
Disallow /

yandexbot

Rule Path
Disallow /

seekportbot

Rule Path
Disallow /

*

Rule Path
Allow /*?vsn=d$
Allow /sitemap/*?page=
Allow /directory/*?page=
Allow /blog?page=
Disallow /*?*
Disallow /cdn-cgi/
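
The catch-all group combines wildcard Allow rules with a blanket Disallow on query strings. The standard-library urllib.robotparser only does simple prefix matching, so the sketch below hand-rolls the widely used Googlebot-style interpretation of * and $ (longest matching pattern wins, Allow preferred on ties); the sample URLs are illustrative and not taken from the report.

    import re

    # Rules from the "*" group above: (pattern, allowed?)
    RULES = [
        ("/*?vsn=d$", True),           # digested assets
        ("/sitemap/*?page=", True),    # paginated sitemaps
        ("/directory/*?page=", True),  # paginated category pages
        ("/blog?page=", True),         # paginated blog homepage
        ("/*?*", False),               # any other query string
        ("/cdn-cgi/", False),
    ]

    def pattern_to_regex(pattern):
        """Translate a robots.txt path pattern: '*' matches anything,
        a trailing '$' anchors the end, otherwise the match is a prefix."""
        regex = re.escape(pattern).replace(r"\*", ".*")
        if regex.endswith(r"\$"):
            regex = regex[:-2] + "$"
        return re.compile(regex)

    def allowed(path):
        # Googlebot-style resolution: the longest matching pattern wins;
        # on a tie, Allow beats Disallow. No match at all means allowed.
        matches = [(len(p), allow) for p, allow in RULES
                   if pattern_to_regex(p).match(path)]
        if not matches:
            return True
        best = max(length for length, _ in matches)
        return any(allow for length, allow in matches if length == best)

    for url in ("/app.js?vsn=d", "/directory/crm?page=2",
                "/blog?page=3", "/search?q=crm", "/cdn-cgi/trace"):
        print(f"{url:30} {'Allow' if allowed(url) else 'Disallow'}")

Under this interpretation the first three sample paths are allowed by the more specific Allow rules, while the last two fall to the /*?* and /cdn-cgi/ Disallow rules.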

Comments

  • Blocks crawlers that are kind enough to obey robots
  • allow digested assets
  • allow paginated sitemaps
  • allow paginated category pages
  • allow paginated blog homepage
  • disallow pages with query strings
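
Most of the groups above simply disallow everything for a specific crawler, which, as the first comment notes, only constrains bots that choose to honor robots.txt. The sketch below shows how Python's urllib.robotparser resolves one such blanket block against a simplified excerpt of the file; the user-agent casing (AhrefsBot) and the example crawler name MyCrawler are assumptions for illustration.

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    # Simplified excerpt: ahrefsbot is blocked outright, any other
    # agent falls through to the "*" group.
    rp.parse([
        "User-agent: AhrefsBot",
        "Disallow: /",
        "",
        "User-agent: *",
        "Disallow: /cdn-cgi/",
    ])

    print(rp.can_fetch("AhrefsBot", "https://www.capterra.jp/directory/"))     # False
    print(rp.can_fetch("MyCrawler", "https://www.capterra.jp/directory/"))     # True
    print(rp.can_fetch("MyCrawler", "https://www.capterra.jp/cdn-cgi/trace"))  # False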