festo.com
robots.txt

Robots Exclusion Standard data for festo.com

Resource Scan

Scan Details

Site Domain festo.com
Base Domain festo.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-11-13T10:59:16+00:00
Next Scan 2024-11-20T10:59:16+00:00

Last Successful Scan

Scanned 2024-10-29T10:38:40+00:00
URL https://festo.com/robots.txt
Domain IPs 141.130.250.196
Response IP 141.130.250.196
Found Yes
Hash 53e7e91b91de3b955442ef1ac9d56a918c11adfc529a6dbf018e1d9ff3b942ed
SimHash 1414175bedf0
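The Hash value is 64 hexadecimal characters, consistent with a SHA-256 digest of the fetched file. A minimal Python sketch that reproduces such a digest, assuming the scanner's Hash field is plain SHA-256 over the raw response body (the digest only matches while the file content is unchanged):

    import hashlib
    import urllib.request

    # Fetch the live robots.txt and hash the raw bytes of the body.
    # Assumption: the Hash field above is SHA-256 over the body.
    with urllib.request.urlopen("https://festo.com/robots.txt") as resp:
        body = resp.read()

    print(hashlib.sha256(body).hexdigest())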

Groups

*

Rule Path
Disallow /*/cart
Disallow /*/checkout
Disallow /*/my-account
Disallow /*/login/
Disallow /*/register/
Disallow /adfs/
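The paths in this group use the '*' wildcard, a widely supported extension to the original exclusion standard that not every parser expands (Python's built-in urllib.robotparser, for instance, matches it literally). A simplified sketch of how such patterns are typically matched, translating each pattern into a regular expression; it checks Disallow rules only and ignores Allow/longest-match precedence:

    import re

    def rule_matches(pattern: str, path: str) -> bool:
        # Translate a robots.txt path pattern into a regex:
        # '*' matches any run of characters, a trailing '$' anchors
        # the end, everything else is a literal prefix match.
        anchored = pattern.endswith("$")
        if anchored:
            pattern = pattern[:-1]
        regex = ".*".join(re.escape(part) for part in pattern.split("*"))
        return re.match("^" + regex + ("$" if anchored else ""), path) is not None

    disallowed = ["/*/cart", "/*/checkout", "/*/my-account",
                  "/*/login/", "/*/register/", "/adfs/"]

    for path in ("/en/cart", "/de/checkout", "/en/products"):
        blocked = any(rule_matches(rule, path) for rule in disallowed)
        print(path, "-> disallowed" if blocked else "-> allowed")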

cazoodlebot

Rule Path
Disallow /

dotbot/1.0

Rule Path
Disallow /

gigabot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mediapartners-google*

Rule Path
Disallow /

israbot

Rule Path
Disallow /

orthogaffe

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /
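Each group above maps back to a User-agent block in the raw file, and every one of them disallows the entire site. Since these per-bot rules contain no wildcards, Python's standard urllib.robotparser handles them directly; a short sketch against a reconstructed fragment (only two of the groups shown, layout assumed from this report):

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.parse("""\
    User-agent: mj12bot
    Disallow: /

    User-agent: *
    Disallow: /adfs/
    """.splitlines())

    print(rp.can_fetch("mj12bot", "https://festo.com/en/products"))       # False: fully blocked
    print(rp.can_fetch("SomeOtherBot", "https://festo.com/en/products"))  # True: only /adfs/ is blocked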

Other Records

Field Value
sitemap https://www.festo.com/Sitemap.xml
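Sitemap is one of the few non-group records that standard parsers expose directly. A sketch using urllib.robotparser (site_maps() requires Python 3.8+), assuming the file is reachable at fetch time:

    from urllib import robotparser

    rp = robotparser.RobotFileParser("https://festo.com/robots.txt")
    rp.read()              # fetch and parse the live file
    print(rp.site_maps())  # e.g. ['https://www.festo.com/Sitemap.xml']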

Comments

  • For all robots
  • Block access to specific groups of pages
  • Allow search crawlers to discover the sitemap
  • Crawl-delay: 5 # 5 seconds between page requests (see the sketch after this list)
  • Block CazoodleBot as it does not present correct accept content headers
  • Block dotbot as it cannot parse base URLs properly
  • Block Gigabot
  • http://mj12bot.com/
  • advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Misbehaving queries
  • http://www.semrush.com/bot.html
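The Crawl-delay comment above refers to a directive that many crawlers honour even though it is not part of the original exclusion standard. A sketch of how Python's urllib.robotparser reads it, assuming the directive is active in the file rather than commented out:

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.parse("""\
    User-agent: *
    Crawl-delay: 5
    """.splitlines())

    print(rp.crawl_delay("*"))  # 5 -> wait five seconds between requests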

Warnings

  • `request-rate` is not a known field.
  • `visit-time` is not a known field.
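Both flagged fields are non-standard extensions that some parsers nevertheless understand: request-rate caps how many requests may be made per time window, and visit-time restricts crawling to a time-of-day range. As a sketch, Python's urllib.robotparser parses request-rate (and silently ignores visit-time); the directive values below are illustrative, not taken from the scanned file:

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.parse("""\
    User-agent: *
    Request-rate: 1/5
    Visit-time: 0600-1200
    """.splitlines())

    print(rp.request_rate("*"))  # RequestRate(requests=1, seconds=5)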