ansarada.com
robots.txt

Robots Exclusion Standard data for ansarada.com

Resource Scan

Scan Details

Site Domain ansarada.com
Base Domain ansarada.com
Scan Status Ok
Last Scan 2024-11-04T21:43:56+00:00
Next Scan 2024-11-18T21:43:56+00:00

Last Scan

Scanned 2024-11-04T21:43:56+00:00
URL https://ansarada.com/robots.txt
Redirect https://www.ansarada.com/robots.txt
Redirect Domain www.ansarada.com
Redirect Base ansarada.com
Domain IPs 13.35.210.125, 13.35.210.29, 13.35.210.70, 13.35.210.99, 2600:9000:2078:1000:8:7a93:43c0:93a1, 2600:9000:2078:200:8:7a93:43c0:93a1, 2600:9000:2078:5c00:8:7a93:43c0:93a1, 2600:9000:2078:5e00:8:7a93:43c0:93a1, 2600:9000:2078:7000:8:7a93:43c0:93a1, 2600:9000:2078:7e00:8:7a93:43c0:93a1, 2600:9000:2078:9400:8:7a93:43c0:93a1, 2600:9000:2078:9800:8:7a93:43c0:93a1
Redirect IPs 104.16.248.114, 104.16.249.114
Response IP 104.16.249.114
Found Yes
Hash e4616ac273ff8d492db5cae42073b17b44d6604683e30d5cb4f07bda87eacc52
SimHash a510795bc7c5
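
The Hash is 64 hexadecimal characters, consistent with a SHA-256 digest of the fetched body, and the SimHash is a short similarity fingerprint used to detect near-duplicate revisions between scans. The scanner's exact recipe is not documented here, so the following is only a sketch, assuming the Hash field is SHA-256 over the raw response body, using Python's standard library:

# Sketch of reproducing the Redirect/Found/Hash fields. Assumption: the
# Hash field is SHA-256 over the raw response body; the scanner's actual
# recipe is not documented in this report.
import hashlib
import urllib.request

def scan_robots(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:  # follows redirects
        body = resp.read()
        return {
            "url": url,
            "redirect": resp.geturl(),  # final URL after any redirects
            "found": resp.status == 200,
            "hash": hashlib.sha256(body).hexdigest(),
        }

print(scan_robots("https://ansarada.com/robots.txt"))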

Groups

User Agent                  Rule     Path
doc                         Disallow /
download ninja              Disallow /
zao                         Disallow /
fetch                       Disallow /
httrack                     Disallow /
larbin                      Disallow /
libwww                      Disallow /
linko                       Disallow /
microsoft.url.control       Disallow /
msiecrawler                 Disallow /
offline explorer            Disallow /
sitecheck.internetseer.com  Disallow /
sitesnagger                 Disallow /
teleport                    Disallow /
teleportpro                 Disallow /
ubicrawler                  Disallow /
webcopier                   Disallow /
webstripper                 Disallow /
webzip                      Disallow /
xenu                        Disallow /
zealbot                     Disallow /
zyborg                      Disallow /
wget                        Disallow /
grub-client                 Disallow /
k2spider                    Disallow /
npbot                       Disallow /
webreaper                   Disallow /
zombies                     Disallow /brains
*                           Disallow /?page=*
*                           Disallow /admin
*                           Disallow /admin/*
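
A crawler applying these groups picks the group whose user-agent token matches its product name (falling back to the * group) and treats each Disallow value as a prefix match on the URL path, with * matching any run of characters and $ anchoring the end (RFC 9309 semantics). Python's standard-library urllib.robotparser does not expand such wildcards, so the sketch below hand-rolls the matching for the * group above; the sample paths are hypothetical.

# Minimal sketch of RFC 9309-style path matching for the '*' group above.
# A rule matches any path it is a prefix of; '*' matches any run of
# characters and a trailing '$' anchors the rule to the end of the path.
import re

DISALLOW = ["/?page=*", "/admin", "/admin/*"]  # rules from the '*' group

def rule_matches(pattern: str, path: str) -> bool:
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None  # re.match anchors at the start

def allowed(path: str) -> bool:
    return not any(rule_matches(rule, path) for rule in DISALLOW)

assert not allowed("/?page=2")      # blocked by /?page=*
assert not allowed("/admin/users")  # blocked by /admin (and /admin/*)
assert allowed("/pricing")          # hypothetical path; no rule matches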

Other Records

Field   Value
Sitemap https://www.ansarada.com/sitemap.xml
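
The Sitemap line is a cross-group directive that any crawler may read regardless of which rule group applies to it. Python's standard library exposes it via RobotFileParser.site_maps() (available since Python 3.8); a brief usage sketch, whose printed values assume the live file still matches this scan:

# Reading the Sitemap directive and checking a rule with the standard
# library; printed values assume the live file still matches this scan.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.ansarada.com/robots.txt")
rp.read()
print(rp.site_maps())                     # ['https://www.ansarada.com/sitemap.xml']
print(rp.can_fetch("wget", "/anything"))  # False: wget is disallowed site-wide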

Comments

  • A list of misbehaving crawlers, originally from http://aardling.com/robots.txt.
  • Some bots are known to be trouble, particularly those designed to copy entire sites.
  • Wget in its recursive mode is a frequent problem.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable (http://www.nameprotect.com/botinfo.html).
  • A capture bot, downloads gazillions of pages with no public benefit (http://www.webreaper.net/).
  • Obviously don't want _these_
  • These rules apply to everyone else.