jojowiki.com
robots.txt

Robots Exclusion Standard data for jojowiki.com

Resource Scan

Scan Details

Site Domain jojowiki.com
Base Domain jojowiki.com
Scan Status Ok
Last Scan 2024-05-24T14:10:18+00:00
Next Scan 2024-05-31T14:10:18+00:00

Last Scan

Scanned 2024-05-24T14:10:18+00:00
URL https://jojowiki.com/robots.txt
Domain IPs 104.21.5.226, 172.67.133.239, 2606:4700:3032::6815:5e2, 2606:4700:3034::ac43:85ef
Response IP 172.67.133.239
Found Yes
Hash f683591c0ece1624a4b90bce7d18d3c348e9be21d55b3c5b352ca08fad239c62
SimHash b6b85549edb3
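
The Hash value is 64 hexadecimal characters, which matches the length of a SHA-256 digest. A plausible but unconfirmed reconstruction is that the scanner hashes the raw bytes of the fetched robots.txt, as in this Python sketch:

    import hashlib
    import urllib.request

    # Assumption: the "Hash" field is SHA-256 over the raw response body.
    # The 64-hex-character length is consistent with that, but the scanner's
    # exact recipe (encoding, normalization) is not documented here.
    with urllib.request.urlopen("https://jojowiki.com/robots.txt") as resp:
        body = resp.read()

    print(hashlib.sha256(body).hexdigest())
    # Would equal the recorded hash if the file is unchanged since the scan.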

Groups

*

Rule Path
Disallow /index.php?*&diff=
Disallow /index.php?*&oldid=
Disallow /index.php?*&action=
Disallow /index.php?*&mobileaction=
Disallow /*/edit
Disallow /*/rollback
Disallow /Translations%3A*
Disallow /*/en
Disallow /UserWiki%3A*
Disallow /api/
Disallow /Special%3A
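
The paths above use '*' wildcards, a widely adopted extension to the original prefix-only exclusion standard; here they block MediaWiki diff, history, action, and edit URLs. Python's standard urllib.robotparser only does literal prefix matching, so the sketch below hand-rolls the wildcard semantics to show how one of these rules applies:

    import re

    def robots_pattern_to_regex(pattern: str) -> re.Pattern:
        """Translate a robots.txt path pattern into a regex.

        '*' matches any run of characters; a trailing '$' anchors the end
        of the URL. Everything else matches literally. This mirrors the
        common wildcard extension, not the 1994 prefix-only standard.
        """
        anchored = pattern.endswith("$")
        body = pattern[:-1] if anchored else pattern
        regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
        return re.compile("^" + regex + ("$" if anchored else ""))

    rule = robots_pattern_to_regex("/index.php?*&diff=")
    print(bool(rule.match("/index.php?title=Foo&diff=123")))  # True: disallowed
    print(bool(rule.match("/index.php?title=Foo")))           # False: allowed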

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /
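
Each of the groups above bans one named user agent from the entire site (Disallow /). A compliant client can check itself against the live file with Python's standard urllib.robotparser; whole-site bans like these fall within the stdlib's literal prefix matching, though the wildcard rules in the '*' group do not. The page URL below is only an example:

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://jojowiki.com/robots.txt")
    rp.read()  # fetch and parse the live robots.txt

    # "wget" matches the banned group of the same name: site-wide Disallow /.
    print(rp.can_fetch("wget", "https://jojowiki.com/Main_Page"))  # False
    # An agent matching no named group falls back to the "*" group's rules.
    print(rp.can_fetch("SomeSearchBot", "https://jojowiki.com/Main_Page"))  # True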

Other Records

Field Value
sitemap https://jojowiki.com/sitemap-index-jojowiki.xml
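
The sitemap record points to a sitemap index rather than a single sitemap. Assuming the file follows the standard Sitemaps 0.9 protocol (whose XML namespace is fixed), the child sitemaps it lists can be enumerated with the standard library alone:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Namespace defined by the Sitemaps 0.9 protocol.
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    url = "https://jojowiki.com/sitemap-index-jojowiki.xml"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    # A sitemap index wraps <sitemap><loc>...</loc></sitemap> entries.
    for loc in tree.findall(".//sm:sitemap/sm:loc", NS):
        print(loc.text)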

Comments

  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • wget in its recursive mode is a frequent problem.
  • The 'grub' distributed client has been poorly behaved.
  • Hits many times per second, not acceptable: http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads pages with no public benefit: http://www.webreaper.net/