kronoskaf.com
robots.txt

Robots Exclusion Standard data for kronoskaf.com

Resource Scan

Scan Details

Site Domain kronoskaf.com
Base Domain kronoskaf.com
Scan Status Ok
Last Scan 2024-11-06T02:47:59+00:00
Next Scan 2024-12-06T02:47:59+00:00

Last Scan

Scanned 2024-11-06T02:47:59+00:00
URL https://kronoskaf.com/robots.txt
Domain IPs 35.215.99.225
Response IP 35.215.99.225
Found Yes
Hash dedaba2e9438b4a7dbca07a7bb5bd0ff46d3c7bc4a3639f24e6b992cb86d7c67
SimHash 0632075946f7
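
The Hash above is a 64-hex-digit content digest, the length of a SHA-256 sum. Below is a minimal Python sketch of reproducing it, assuming the scanner simply hashes the raw response body with SHA-256 (that input choice is an assumption; this report does not document the hashing scheme):

    import hashlib
    import urllib.request

    REPORTED = "dedaba2e9438b4a7dbca07a7bb5bd0ff46d3c7bc4a3639f24e6b992cb86d7c67"

    # Fetch the same URL the scanner used and hash the raw bytes.
    with urllib.request.urlopen("https://kronoskaf.com/robots.txt") as resp:
        body = resp.read()

    digest = hashlib.sha256(body).hexdigest()
    print(digest)
    print("matches scan:", digest == REPORTED)

If the file has changed since the Scanned timestamp, the digests will no longer match.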

Groups

User Agent                    Rules
israbot                       Disallow: (empty)
orthogaffe                    Disallow: (empty)
mauibot                       Disallow: /
serpstatbot                   Disallow: /
blexbot                       Disallow: /
sitebot                       Disallow: /
linguee                       Disallow: /
ubicrawler                    Disallow: /
cityreview                    Disallow: /
gurujibot                     Disallow: /
doc                           Disallow: /
zao                           Disallow: /
dotbot                        Disallow: /
charlotte                     Disallow: /
msnbot                        Disallow: /
speedy                        Disallow: /
scoutjet                      Disallow: /
discobot                      Disallow: /
accelobot                     Disallow: /
youdaobot                     Disallow: /
yodaobot                      Disallow: /
kalooga                       Disallow: /
sbider                        Disallow: /
dolphin                       Disallow: /
sindicebot                    Disallow: /
oozbot                        Disallow: /
naver                         Disallow: /
yeti                          Disallow: /
baiduspider                   Disallow: /
twengabot-discover            Disallow: /
spbot                         Disallow: /
bixolabs                      Disallow: /
mlbot                         Disallow: /
mj12bot                       Disallow: /
sitecheck.internetseer.com    Disallow: /
zealbot                       Disallow: /
msiecrawler                   Disallow: /
sitesnagger                   Disallow: /
webstripper                   Disallow: /
webcopier                     Disallow: /
fetch                         Disallow: /
offline explorer              Disallow: /
teleport                      Disallow: /
teleportpro                   Disallow: /
webzip                        Disallow: /
linko                         Disallow: /
httrack                       Disallow: /
microsoft.url.control         Disallow: /
xenu                          Disallow: /
larbin                        Disallow: /
libwww                        Disallow: /
zyborg                        Disallow: /
download ninja                Disallow: /
wget                          Disallow: /
grub-client                   Disallow: /
ahrefsbot                     Disallow: /
robot                         Disallow: /
bot-                          Disallow: /
bot/                          Disallow: /
crawl                         Disallow: /
exabot                        Disallow: /
k2spider                      Disallow: /
attributor.com                Disallow: /
aisearchbot                   Disallow: /
spider                        Disallow: /
npbot                         Disallow: /
webreaper                     Disallow: /
slurp                         Disallow: /
yandex                        Disallow: /
semrushbot                    Disallow: /
semrushbot-sa                 Disallow: /
msnbot                        Disallow: /trap/; Crawl-delay: 120
*                             Disallow: /trap/; Crawl-delay: 2
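
Under the Robots Exclusion Standard, a crawler obeys the group whose user-agent line matches its own name and falls back to the * group otherwise. Below is a sketch of checking the rules above with Python's standard-library parser; the bot names come from the table, while the URL paths are hypothetical examples:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://kronoskaf.com/robots.txt")
    rp.read()  # fetches and parses the live file, which may differ from this scan

    # mj12bot matches a fully disallowed group: nothing may be fetched.
    print(rp.can_fetch("mj12bot", "https://kronoskaf.com/syw/index.php"))   # False

    # An unlisted agent falls back to "*": only /trap/ is off limits.
    print(rp.can_fetch("SomeOtherBot", "https://kronoskaf.com/trap/page"))  # False
    print(rp.can_fetch("SomeOtherBot", "https://kronoskaf.com/syw/"))       # True
    print(rp.crawl_delay("SomeOtherBot"))                                   # 2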

Comments

  • robots.txt for http://www.wikipedia.org/ and friends
  • Please note: There are a lot of pages on this site, and there are some misbehaved spiders out there that go _way_ too fast. If you're irresponsible, your access to the site may be blocked.
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • User-agent: ia_archiver
  • Disallow: /
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance.
  • The 'grub' distributed client has been *very* poorly behaved. Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable: http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit: http://www.webreaper.net/
  • Sorry but you are crawling our website much too often
  • User-agent: Slurp
  • Crawl-delay: 10
  • User-agent: Yandex
  • Crawl-delay: 10
  • Friendly, low-speed bots are welcome viewing article pages, but not dynamically-generated pages please.
  • Msnbot interprets the delay as seconds
  • Not sure of the behavior of the "?" character, so the following disallows have been put in comments:
  • Disallow: /syw/index.php?title=Special:Randompage
  • Disallow: /syw/index.php?title=Special%3ARandompage
  • Disallow: /syw/index.php?title=Special:Search
  • Disallow: /syw/index.php?title=Special%3ASearch
  • Disallow: /syw/index.php?title=Special:Recentchanges
  • Disallow: /syw/index.php?title=Special%3ARecentchanges
  • *At least* 2 seconds please, preferably more :D
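
Taken together, the comments ask for one thing: pace your requests, leaving at least two seconds between hits (wget users can pass --wait=2 instead of running recursive mode at full speed). Below is a sketch of a fetch loop honoring that, with a hypothetical bot name and page URLs:

    import time
    import urllib.request
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://kronoskaf.com/robots.txt")
    rp.read()

    # The "*" group requests Crawl-delay: 2; fall back to 2 s if it is absent.
    delay = rp.crawl_delay("MyPoliteBot") or 2

    for url in ("https://kronoskaf.com/syw/index.php",
                "https://kronoskaf.com/"):
        if not rp.can_fetch("MyPoliteBot", url):
            continue  # skip anything the file disallows for this agent
        with urllib.request.urlopen(url) as resp:
            resp.read()
        time.sleep(delay)  # at least two seconds between hits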

Warnings

  • 2 invalid lines.