loftwall.com
robots.txt

Robots Exclusion Standard data for loftwall.com

Resource Scan

Scan Details

Site Domain loftwall.com
Base Domain loftwall.com
Scan Status Ok
Last Scan 2025-09-21T03:56:47+00:00
Next Scan 2025-10-21T03:56:47+00:00

Last Scan

Scanned 2025-09-21T03:56:47+00:00
URL https://loftwall.com/robots.txt
Domain IPs 104.26.12.189, 104.26.13.189, 172.67.71.238, 2606:4700:20::681a:cbd, 2606:4700:20::681a:dbd, 2606:4700:20::ac43:47ee
Response IP 104.26.12.189
Found Yes
Hash f8fe2516f5f38dc0227c177769fab8dd7cc8ac1391d2f6529e2b9fd3df8d5be2
SimHash b6101159c6f7
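
The Hash is a 64-character hex digest, the length of a SHA-256 checksum; assuming the scanner hashes the raw response body (a common convention, not stated here), the recorded value can be re-checked in a few lines of Python. The SimHash is a near-duplicate fingerprint, so lightly edited files yield nearby values:

    import hashlib
    import urllib.request

    # Digest recorded by the scan above.
    RECORDED = "f8fe2516f5f38dc0227c177769fab8dd7cc8ac1391d2f6529e2b9fd3df8d5be2"

    with urllib.request.urlopen("https://loftwall.com/robots.txt") as resp:
        body = resp.read()

    digest = hashlib.sha256(body).hexdigest()
    print(digest)
    print("unchanged since scan" if digest == RECORDED else "file has changed")

A mismatch only means the file was edited after the scan date shown above, not that the scan was wrong.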

Groups

*

Rule Path
Disallow *?s=
Disallow *?replytocom=
Disallow /cgi-bin
Disallow /wp-admin
Disallow /trackback
Disallow /comments
Disallow */trackback
Disallow */comments
Disallow /category/*/*
Disallow /tag/
Disallow /2006/
Disallow /2007/
Disallow /2008/
Disallow /2009/
Disallow /2010/
Disallow /2011/
Disallow /2012/
Disallow /2013/
Disallow /2014/
Disallow /2015/
Disallow /beta/
Disallow /rss.xml
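
Rules such as *?s= and /category/*/* rely on Googlebot-style wildcards, which are not part of the original Robots Exclusion Standard and which Python's stdlib urllib.robotparser ignores (it does plain prefix matching). A minimal sketch of the wildcard matching, with illustrative URLs; this is a simplification, not a complete implementation of the protocol:

    import re

    def rule_to_regex(path_pattern: str) -> re.Pattern:
        # '*' matches any run of characters; a trailing '$' anchors the end
        # of the URL. Everything else is matched literally.
        anchored = path_pattern.endswith("$")
        core = path_pattern[:-1] if anchored else path_pattern
        body = "".join(".*" if ch == "*" else re.escape(ch) for ch in core)
        return re.compile(body + ("$" if anchored else ""))

    # A few of the rules from the '*' group above.
    RULES = ["*?s=", "*?replytocom=", "/wp-admin", "/category/*/*", "/tag/"]
    PATTERNS = [rule_to_regex(r) for r in RULES]

    def blocked(path_and_query: str) -> bool:
        # A rule matches from the start of the path; a leading '*' lets it
        # match anywhere in the URL.
        return any(p.match(path_and_query) for p in PATTERNS)

    print(blocked("/products/?s=desk"))   # True, via *?s=
    print(blocked("/category/a/b/"))      # True, via /category/*/*
    print(blocked("/about/"))             # False

The group as a whole is a typical WordPress cleanup: search and reply-to-comment query strings, admin and trackback paths, and old date archives are all kept out of crawlers' reach.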

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

israbot

Rule Path
Disallow

orthogaffe

Rule Path
Disallow
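
Unlike the surrounding groups, israbot and orthogaffe carry an empty Disallow, which forbids nothing: an explicit allow-all, the opposite of Disallow /. Python's stdlib urllib.robotparser implements this distinction, as a small sketch shows:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.parse([
        "User-agent: israbot",
        "Disallow:",            # empty path: nothing is disallowed
        "",
        "User-agent: ubicrawler",
        "Disallow: /",          # root path: the whole site is disallowed
    ])
    print(rp.can_fetch("israbot", "https://loftwall.com/"))     # True
    print(rp.can_fetch("ubicrawler", "https://loftwall.com/"))  # False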

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

Comments

  • WIKIPEDIA'S LIST OF BAD BOTS
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • Sorry, wget in its recursive mode is a frequent problem.
  • Please read the man page and use it properly; there is a
  • --wait option you can use to set the delay between hits,
  • for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
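
The wget comment above points at a real option: wget --wait=SECONDS pauses between successive requests in recursive mode. A polite client can combine that kind of delay with a robots.txt check; a minimal sketch in Python, with a hypothetical user-agent name, assuming the live file still contains the rules shown above:

    import time
    import urllib.request
    import urllib.robotparser

    AGENT = "ExampleBot"  # hypothetical name, for illustration only

    rp = urllib.robotparser.RobotFileParser("https://loftwall.com/robots.txt")
    rp.read()

    for url in [
        "https://loftwall.com/",
        "https://loftwall.com/wp-admin/",  # disallowed for * in the groups above
    ]:
        if not rp.can_fetch(AGENT, url):
            print("skipping (disallowed):", url)
            continue
        req = urllib.request.Request(url, headers={"User-Agent": AGENT})
        with urllib.request.urlopen(req) as resp:
            print(url, resp.status)
        time.sleep(1.0)  # pause between hits, like wget --wait=1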