starfinderwiki.com
robots.txt

Robots Exclusion Standard data for starfinderwiki.com

Resource Scan

Scan Details

Site Domain starfinderwiki.com
Base Domain starfinderwiki.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2026-01-18T19:17:47+00:00
Next Scan 2026-03-19T19:17:47+00:00

Last Successful Scan

Scanned 2025-10-26T02:34:26+00:00
URL https://starfinderwiki.com/robots.txt
Domain IPs 104.21.48.72, 172.67.181.115, 2606:4700:3030::6815:3048, 2606:4700:3031::ac43:b573
Response IP 104.21.48.72
Found Yes
Hash df290b7e364ffaea977c2b6f7cd066e50658e9541f5c6603508aad00ff72035f
SimHash 67184b53c4c5

Groups

mj12bot

Rule Path
Disallow /

mediapartners-google*

Rule Path
Disallow /

israbot

Rule Path
Disallow

orthogaffe

Rule Path
Disallow

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

amazonbot

Rule Path
Disallow /

applebot-extended

Rule Path
Disallow /

bytespider

Rule Path
Disallow /

ccbot

Rule Path
Disallow /

claudebot

Rule Path
Disallow /

google-extended

Rule Path
Disallow /

gptbot

Rule Path
Disallow /

meta-externalagent

Rule Path
Disallow /

*

Rule Path
Allow /w/load.php?
Allow /api/rest_v1/?doc
Disallow /w/
Disallow /api/
Disallow /trap/
Disallow /wiki/Special%3A
Disallow /w/index.php?*title=Special%3A
Disallow /w/index.php?*Special%3ARecentChangesLinked
Disallow /w/index.php?action=edit

Other Records

Field Value
crawl-delay 1
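
Read together, the wildcard group above leaves ordinary article pages open to generic crawlers while fencing off /w/, /api/, /trap/ and Special: pages, carves out /w/load.php? and /api/rest_v1/?doc as exceptions, and asks for a one-second crawl delay; named groups such as gptbot are excluded from everything. A minimal sketch of how those directives evaluate, using Python's standard-library urllib.robotparser against a partial reconstruction of the file (the robots.txt text and example URLs below are inferred for illustration, not copied from the fetched resource):

    # A minimal sketch, assuming Python 3.6+ and the rules listed above;
    # ROBOTS_TXT is a partial reconstruction, not the fetched file.
    from urllib import robotparser

    ROBOTS_TXT = """\
    User-agent: gptbot
    Disallow: /

    User-agent: *
    Allow: /w/load.php?
    Allow: /api/rest_v1/?doc
    Disallow: /w/
    Disallow: /api/
    Disallow: /trap/
    Crawl-delay: 1
    """

    rp = robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())

    base = "https://starfinderwiki.com"
    # Ordinary article pages are open to a generic crawler...
    print(rp.can_fetch("ExampleBot", base + "/wiki/Main_Page"))           # True
    # ...but dynamically generated pages under /w/ are not,
    print(rp.can_fetch("ExampleBot", base + "/w/index.php?action=edit"))  # False
    # except for the load.php carve-out allowed above.
    print(rp.can_fetch("ExampleBot", base + "/w/load.php?modules=site"))  # True
    # AI-training crawlers such as gptbot are excluded from everything.
    print(rp.can_fetch("gptbot", base + "/wiki/Main_Page"))               # False
    # Crawl-delay from the "Other Records" table above.
    print(rp.crawl_delay("ExampleBot"))                                   # 1

Note that urllib.robotparser applies rules by simple prefix match in file order, so the Allow /w/load.php? exception only takes effect because it precedes Disallow /w/; it does not interpret the embedded * wildcards in the index.php rules, which longest-match parsers such as Google's handle differently.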

addsearchbot
ai2bot
ai2bot-dolma
aihitbot
amazonbot
andibot
anthropic-ai
applebot
applebot-extended
awario
bedrockbot
bigsur.ai
brightbot 1.0
bytespider
ccbot
chatgpt agent
chatgpt-user
claude-searchbot
claude-user
claude-web
claudebot
cloudvertexbot
cohere-ai
cohere-training-data-crawler
cotoyogi
crawlspace
datenbank crawler
deepseekbot
devin
diffbot
duckassistbot
echobot bot
echoboxbot
facebookbot
facebookexternalhit
factset_spyderbot
firecrawlagent
friendlycrawler
gemini-deep-research
google-cloudvertexbot
google-extended
google-firebase
googleagent-mariner
googleother
googleother-image
googleother-video
gptbot
iaskspider/2.0
icc-crawler
imagesiftbot
img2dataset
isscyberriskcrawler
kangaroo bot
linerbot
meta-externalagent
meta-externalagent
meta-externalfetcher
meta-externalfetcher
meta-webindexer
mistralai-user
mistralai-user/1.0
mycentralaiscraperbot
netestate imprint crawler
novaact
oai-searchbot
omgili
omgilibot
openai
operator
pangubot
panscient
panscient.com
perplexity-user
perplexitybot
petalbot
phindbot
poseidon research crawler
qualifiedbot
quillbot
quillbot.com
sbintuitionsbot
scrapy
semrushbot-ocob
semrushbot-swa
shapbot
sidetrade indexer bot
terracotta
thinkbot
tiktokspider
timpibot
velenpublicwebcrawler
wardbot
webzio-extended
wpbot
yak
yandexadditional
yandexadditionalbot
youbot

Rule Path
Disallow /

Comments

  • robots.txt for http://www.wikipedia.org/ and friends, adapted to our wikis
  • Please note: There are a lot of pages on this site, and there are some misbehaved spiders out there that go _way_ too fast. If you're irresponsible, your access to the site may be blocked.
  • Observed spamming large amounts of https://en.wikipedia.org/?curid=NNNNNN and ignoring 429 ratelimit responses; claims to respect robots: http://mj12bot.com/
  • advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • Sorry, wget in its recursive mode is a frequent problem. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable: http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit: http://www.webreaper.net/
  • some AI crawlers
  • Friendly, low-speed bots are welcome viewing article pages, but not dynamically-generated pages please.
  • Inktomi's "Slurp" can read a minimum delay between hits; if your bot supports such a thing using the 'Crawl-delay' or another instruction, please let us know.
  • There is a special exception for API mobileview to allow dynamic mobile web & app views to load section content. These views aren't HTTP-cached but use parser cache aggressively and don't expose special: pages etc.
  • Another exception is for REST API documentation, located at /api/rest_v1/?doc.
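
The comments add up to a crawl policy: friendly, low-speed bots may read article pages, should honour the one-second Crawl-delay, and must not ignore 429 responses. A rough sketch of a client following that policy, assuming only Python's standard library; the user-agent string and the article path are illustrative placeholders, not values taken from the scan:

    # A rough sketch of the polite, low-speed behaviour the comments ask for.
    # Assumes Python 3.6+; user-agent and page list are illustrative only.
    import time
    import urllib.error
    import urllib.request
    from urllib import robotparser

    BASE = "https://starfinderwiki.com"
    USER_AGENT = "ExampleBot/0.1 (+https://example.org/bot)"  # hypothetical

    rp = robotparser.RobotFileParser(BASE + "/robots.txt")
    rp.read()
    delay = rp.crawl_delay(USER_AGENT) or 1   # "crawl-delay 1" from the record above

    for path in ("/wiki/Main_Page",):          # article pages only, per the comments
        url = BASE + path
        if not rp.can_fetch(USER_AGENT, url):  # skip anything robots.txt disallows
            continue
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        try:
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
        except urllib.error.HTTPError as err:
            if err.code == 429:                # rate-limited: back off, don't ignore it
                time.sleep(max(delay, 30))
                continue
            raise
        time.sleep(delay)                      # wait between hits, like wget's --wait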