commonjs.org
robots.txt

Robots Exclusion Standard data for commonjs.org

Resource Scan

Scan Details

Site Domain commonjs.org
Base Domain commonjs.org
Scan Status Ok
Last Scan 2025-05-17T00:21:14+00:00
Next Scan 2025-06-16T00:21:14+00:00

Last Scan

Scanned 2025-05-17T00:21:14+00:00
URL https://commonjs.org/robots.txt
Redirect https://wiki.commonjs.org/robots.txt
Redirect Domain wiki.commonjs.org
Redirect Base commonjs.org
Domain IPs 104.21.2.112, 172.67.129.30, 2606:4700:3034::6815:270, 2606:4700:3034::ac43:811e
Redirect IPs 104.21.2.112, 172.67.129.30, 2606:4700:3034::6815:270, 2606:4700:3034::ac43:811e
Response IP 104.21.2.112
Found Yes
Hash e34b6d9d0dae0533c609fdbd718a0de6e8e044cf9eafad7ecd4808fa2dad3c23
SimHash 95154159e5f7
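
The Hash above is 64 hexadecimal characters, the length of a SHA-256 digest, and the SimHash is a short similarity fingerprint of the kind used to detect near-duplicate content between scans. A minimal sketch of reproducing the former, assuming (the report does not say so) that Hash is SHA-256 over the raw response body:

    import hashlib
    import urllib.request

    # urllib follows the redirect from commonjs.org to
    # wiki.commonjs.org automatically.
    with urllib.request.urlopen("https://commonjs.org/robots.txt") as resp:
        body = resp.read()
        final_url = resp.geturl()  # URL after any redirects

    print(final_url)
    # Assumption: the report's Hash is SHA-256 of the raw body.
    print(hashlib.sha256(body).hexdigest())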

Groups

mediapartners-google*
Disallow: /

israbot
Disallow:

orthogaffe
Disallow:

ubicrawler
Disallow: /

doc
Disallow: /

zao
Disallow: /

sitecheck.internetseer.com
Disallow: /

zealbot
Disallow: /

msiecrawler
Disallow: /

sitesnagger
Disallow: /

webstripper
Disallow: /

webcopier
Disallow: /

fetch
Disallow: /

offline explorer
Disallow: /

teleport
Disallow: /

teleportpro
Disallow: /

webzip
Disallow: /

linko
Disallow: /

httrack
Disallow: /

microsoft.url.control
Disallow: /

xenu
Disallow: /

larbin
Disallow: /

libwww
Disallow: /

zyborg
Disallow: /

download ninja
Disallow: /

grub-client
Disallow: /

k2spider
Disallow: /

npbot
Disallow: /

webreaper
Disallow: /

sistrix
Disallow: /

*
Disallow: /index.php
Disallow: /api.php
Disallow: /load.php
Disallow: /wiki/Special%3ARandom
Disallow: /wiki/Special%3ASearch
Disallow: /wiki/Special%3APreferences
Disallow: /wiki/Special%3AContributions
Disallow: /wiki/Special%3AListGroupRights
Disallow: /wiki/Special%3AListUsers
Disallow: /wiki/Special%3AAbuseLog
Disallow: /wiki/Special%3ABrowseData/
Disallow: /wiki/Special%3ARecentChangesLinked
Disallow: /wiki/Special%3AWhatLinksHere
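
The groups above are ordinary robots.txt records: a crawler matches the group naming its own user agent and otherwise falls through to the catch-all *. A minimal sketch of evaluating these rules with Python's standard-library urllib.robotparser, using an abbreviated two-group excerpt and a hypothetical bot name:

    from urllib.robotparser import RobotFileParser

    # Abbreviated excerpt of the groups listed above.
    rules = [
        "User-agent: mediapartners-google*",
        "Disallow: /",
        "",
        "User-agent: *",
        "Disallow: /index.php",
        "Disallow: /api.php",
        "Disallow: /load.php",
    ]

    rp = RobotFileParser()
    rp.parse(rules)

    base = "https://wiki.commonjs.org"
    for path in ("/wiki/Modules", "/index.php?title=Modules", "/api.php"):
        # "ExampleBot" matches no named group, so the * rules apply.
        print(path, rp.can_fetch("ExampleBot", base + path))
    # -> /wiki/Modules True
    #    /index.php?title=Modules False
    #    /api.php False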

Comments

  • Advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • The 'grub' distributed client has been *very* poorly behaved. Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Don't allow the Wayback Machine to index user pages
  • User-agent: ia_archiver
  • Disallow: /wiki/User
  • Disallow: /wiki/Benutzer
  • Floods requests at an unacceptably fast rate and doesn't appear to have a good purpose
  • Friendly, low-speed bots are welcome viewing article pages, but not dynamically-generated pages, please.
  • Inktomi's "Slurp" can read a minimum delay between hits; if your bot supports such a thing using the 'Crawl-delay' or another instruction, please let us know.
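
The closing comment invites bots that support a minimum delay between hits to honour one. A minimal sketch of what that could look like on the client side, using the standard library's crawl_delay() helper; the bot name, the URLs, and the 5-second fallback are illustrative assumptions, since this robots.txt sets no Crawl-delay of its own:

    import time
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://wiki.commonjs.org/robots.txt")
    rp.read()

    BOT = "ExampleBot"  # hypothetical agent name
    # crawl_delay() returns None when the file sets no Crawl-delay,
    # so fall back to a conservative default.
    delay = rp.crawl_delay(BOT) or 5.0

    for url in ("https://wiki.commonjs.org/wiki/Modules",
                "https://wiki.commonjs.org/wiki/Promises"):
        if rp.can_fetch(BOT, url):
            print("would fetch", url)
        time.sleep(delay)  # pause between hits, per the site's request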