discourse.jupyter.org robots.txt

Robots Exclusion Standard data for discourse.jupyter.org

Resource Scan

Scan Details

Site Domain discourse.jupyter.org
Base Domain jupyter.org
Scan Status Ok
Last Scan 2025-08-01T00:55:27+00:00
Next Scan 2025-08-31T00:55:27+00:00

Last Scan

Scanned 2025-08-01T00:55:27+00:00
URL https://discourse.jupyter.org/robots.txt
Domain IPs 216.66.8.43, 2602:fd3f:2:ff01::2b
Response IP 216.66.8.43
Found Yes
Hash 6000240792fc982c2c0c5ca9a43e83b412863899e4f6c1fa47c7a952e5815e72
SimHash a89d9dc567f1

Groups

mauibot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

seo spider

Rule Path
Disallow /

*

Rule Path
Disallow /admin/
Disallow /auth/
Disallow /assets/browser-update*.js
Disallow /email/
Disallow /session
Disallow /user-api-key
Disallow /*?api_key*
Disallow /*?*api_key*
Disallow /badges
Disallow /my
Disallow /search
Disallow /tag/*/l
Disallow /g
Disallow /t/*/*.rss
Disallow /c/*.rss

googlebot

Rule Path
Disallow /admin/
Disallow /auth/
Disallow /assets/browser-update*.js
Disallow /email/
Disallow /session
Disallow /user-api-key
Disallow /*?api_key*
Disallow /*?*api_key*

Other Records

Field Value
sitemap https://discourse.jupyter.org/sitemap.xml

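Reconstructed File

For reference, the groups and sitemap record above correspond to a robots.txt file along the following lines. This is a reconstruction from the parsed data, not the verbatim file: the original's comment lines, casing, blank lines, and rule ordering may differ.

    User-agent: mauibot
    Disallow: /

    User-agent: semrushbot
    Disallow: /

    User-agent: ahrefsbot
    Disallow: /

    User-agent: blexbot
    Disallow: /

    User-agent: seo spider
    Disallow: /

    User-agent: *
    Disallow: /admin/
    Disallow: /auth/
    Disallow: /assets/browser-update*.js
    Disallow: /email/
    Disallow: /session
    Disallow: /user-api-key
    Disallow: /*?api_key*
    Disallow: /*?*api_key*
    Disallow: /badges
    Disallow: /my
    Disallow: /search
    Disallow: /tag/*/l
    Disallow: /g
    Disallow: /t/*/*.rss
    Disallow: /c/*.rss

    User-agent: googlebot
    Disallow: /admin/
    Disallow: /auth/
    Disallow: /assets/browser-update*.js
    Disallow: /email/
    Disallow: /session
    Disallow: /user-api-key
    Disallow: /*?api_key*
    Disallow: /*?*api_key*

    Sitemap: https://discourse.jupyter.org/sitemap.xml
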
Comments

  • See https://datatracker.ietf.org/doc/rfc9309 for documentation on how to use the robots.txt file
  • Google uses the same format as the standard above. More info at https://developers.google.com/search/docs/crawling-indexing/robots/robots_txt
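
Checking URLs Against These Rules

A minimal sketch of how a crawler could test URLs against this file, using Python's standard-library urllib.robotparser. The user-agent string "MyCrawler" is a hypothetical example. Note that urllib.robotparser does plain prefix matching and does not implement the "*" wildcard syntax used in several rules above, so wildcard rules such as /*?api_key* may not match as RFC 9309 intends.

    from urllib import robotparser

    # Fetch and parse the live robots.txt (network access assumed).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://discourse.jupyter.org/robots.txt")
    rp.read()

    # A generic crawler falls under the '*' group.
    print(rp.can_fetch("MyCrawler", "https://discourse.jupyter.org/admin/"))       # False
    print(rp.can_fetch("MyCrawler", "https://discourse.jupyter.org/t/example/1"))  # True

    # Blanket-banned agents match their own group and are denied everywhere.
    print(rp.can_fetch("mauibot", "https://discourse.jupyter.org/"))               # False

    # Sitemap records are exposed as well (Python 3.8+).
    print(rp.site_maps())  # ['https://discourse.jupyter.org/sitemap.xml']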