
Robots Exclusion Standard data for libreplanet.org

Resource Scan

Scan Details

Site Domain libreplanet.org
Base Domain libreplanet.org
Scan Status Ok
Last Scan 2025-10-24T23:39:53+00:00
Next Scan 2025-11-23T23:39:53+00:00

Last Scan

Scanned 2025-10-24T23:39:53+00:00
URL https://libreplanet.org/robots.txt
Domain IPs 2001:470:142:5::242, 209.51.188.242
Response IP 209.51.188.242
Found Yes
Hash bb7ae1d7df12726668cf6bd834601253d1569aceeb944fdc7604cc4fb42ccc68
SimHash 361c5959ccc7
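
The Hash field is 64 hex digits, consistent with a SHA-256 digest of the fetched file. A minimal sketch for reproducing it, assuming the scanner hashes the raw response body without any normalization:

    import hashlib
    import urllib.request

    # Fetch the same resource the scan recorded; the content may have
    # changed since 2025-10-24, in which case the digests will differ.
    with urllib.request.urlopen("https://libreplanet.org/robots.txt") as resp:
        body = resp.read()

    digest = hashlib.sha256(body).hexdigest()
    recorded = "bb7ae1d7df12726668cf6bd834601253d1569aceeb944fdc7604cc4fb42ccc68"
    print(digest == recorded)

SimHash, by contrast, is a locality-sensitive fingerprint: small edits to the file move the value only slightly, which lets a scanner flag near-duplicate revisions between scans without storing full copies.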

Groups

*

Rule Path
Allow /w/load.php?
Disallow /w/
Disallow /api/
Disallow /trap/
Disallow /wiki/Special%3A
Disallow /wiki/Spezial%3A
Disallow /wiki/Spesial%3A

Other Records

Field Value
crawl-delay 4
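
Read as a whole, the wildcard group welcomes crawlers on article pages and the load.php resource loader, but keeps them out of /w/, /api/, /trap/, and the percent-encoded Special: page variants, at no more than one request every 4 seconds. A sketch of how Python's urllib.robotparser evaluates the plain-prefix rules above (it implements neither * wildcards nor %-decoding of rule paths, so the Special%3A rules are omitted here):

    from urllib.robotparser import RobotFileParser

    # Rules reconstructed from the scanned "*" group above.
    lines = [
        "User-agent: *",
        "Allow: /w/load.php?",
        "Disallow: /w/",
        "Disallow: /api/",
        "Disallow: /trap/",
        "Crawl-delay: 4",
    ]
    rp = RobotFileParser()
    rp.parse(lines)

    print(rp.can_fetch("*", "https://libreplanet.org/wiki/GNU"))        # True
    print(rp.can_fetch("*", "https://libreplanet.org/w/load.php?m=x"))  # True: Allow is listed first
    print(rp.can_fetch("*", "https://libreplanet.org/w/index.php"))     # False
    print(rp.crawl_delay("*"))                                          # 4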

ia_archiver

Rule Path
Allow /*%26action%3Draw
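
The rule path is percent-encoded: %26 is & and %3D is =, so this permits any URL containing &action=raw, letting the Internet Archive fetch the raw wikitext of pages (see the comments below). A quick illustration that decodes the pattern and treats * as a wildcard per the common extended robots.txt syntax; the regex translation here is an illustration, not the archiver's own matcher:

    import re
    from urllib.parse import unquote

    rule = unquote("/*%26action%3Draw")   # -> "/*&action=raw"

    # '*' matches any run of characters; a rule matches as a prefix
    # unless it is anchored with '$'.
    pattern = re.compile(".*".join(map(re.escape, rule.split("*"))))

    print(bool(pattern.match("/w/index.php?title=GNU&action=raw")))  # True
    print(bool(pattern.match("/wiki/GNU")))                          # False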

amazonbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

dataforseobot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mediapartners-google*

Rule Path
Disallow /

israbot

Rule Path
Disallow (empty path: all paths allowed)

orthogaffe

Rule Path
Disallow (empty path: all paths allowed)

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

semanticscholarbot

Rule Path
Disallow /

dataforseobot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

barkrowler

Rule Path
Disallow /

screaming frog seo spider

Rule Path
Disallow /

zoombot

Rule Path
Disallow /

magpie-crawler

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

rogerbot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

siteauditbot

Rule Path
Disallow /

semrushbot-ba

Rule Path
Disallow /

semrushbot-si

Rule Path
Disallow /

semrushbot-swa

Rule Path
Disallow /

splitsignalbot

Rule Path
Disallow /

semrushbot-ocob

Rule Path
Disallow /

jamesbot

Rule Path
Disallow /

oncrawl

Rule Path
Disallow /

awariorssbot

Rule Path
Disallow /

awariosmartbot

Rule Path
Disallow /

awariobot

Rule Path
Disallow /

serpstatbot

Rule Path
Disallow /

netestate ne crawler

Rule Path
Disallow /

panscient.com

Rule Path
Disallow /
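
Taken together, these groups form a long per-bot blocklist: site-mirroring tools (httrack, wget, teleport), SEO and marketing crawlers (ahrefsbot, mj12bot, the semrushbot family), and assorted misbehaving agents all get Disallow /, while anything unlisted falls through to the permissive * group. A quick check against the live file, assuming network access (results reflect whatever is published when you run it, which may differ from this scan):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://libreplanet.org/robots.txt")
    rp.read()  # fetch and parse the live file

    for agent in ["wget", "ahrefsbot", "amazonbot", "SomeUnlistedBot"]:
        print(agent, rp.can_fetch(agent, "https://libreplanet.org/wiki/GNU"))
    # Per this scan: wget and ahrefsbot are blocked outright, while
    # amazonbot and unlisted agents fall back to the "*" rules.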

Comments

  • From https://en.wikipedia.org/robots.txt
  • Friendly, low-speed bots are welcome viewing article pages, but not dynamically-generated pages please.
  • Allow the Internet Archiver to index action=raw and thereby store the raw wikitext of pages
  • Unclear that this bot would be helpful for us. Lots of traffic.
  • robots.txt for http://www.wikipedia.org/ and friends
  • Please note: There are a lot of pages on this site, and there are some misbehaved spiders out there that go _way_ too fast. If you're irresponsible, your access to the site may be blocked.
  • Observed spamming large amounts of https://en.wikipedia.org/?curid=NNNNNN and ignoring 429 ratelimit responses, claims to respect robots: http://mj12bot.com/
  • advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • Sorry, wget in its recursive mode is a frequent problem. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance.
  • The 'grub' distributed client has been *very* poorly behaved. Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable: http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit: http://www.webreaper.net/
  • FSF additions
  • This site does not have white pages.
  • DataForSeo - SEO
  • webmeup - SEO
  • Ahrefs - SEO
  • babbar - SEO
  • Screamingfrog - SEO
  • Seozoom - SEO
  • Brandwatch - SEO
  • Begin Moz - SEO
  • Not to be confused with Mozilla.
  • End Moz - SEO
  • Begin Semrush - SEO
  • End Semrush - SEO
  • cognitiveSEO - SEO
  • oncrawl - SEO
  • BEGIN Awario - Marketing
  • END Awario - Marketing
  • SERPSTAT - SEO
  • website-datenbank.de - Search engine?
  • Ignores crawl-delay and does not help us.