docs.haskellstack.org
robots.txt

Robots Exclusion Standard data for docs.haskellstack.org

Resource Scan

Scan Details

Site Domain docs.haskellstack.org
Base Domain haskellstack.org
Scan Status Ok
Last Scan 2025-06-21T21:24:33+00:00
Next Scan 2025-07-21T21:24:33+00:00

Last Scan

Scanned 2025-06-21T21:24:33+00:00
URL https://docs.haskellstack.org/robots.txt
Domain IPs 104.16.253.120, 104.16.254.120, 2606:4700::6810:fd78, 2606:4700::6810:fe78
Response IP 104.16.254.120
Found Yes
Hash 4e7e715a2fc3f85bcfed73dd230f1ed49f16804974ce6a97be2113b7cae80b9e
SimHash 2a053d01a7e6

Groups

*

Rule      Path              Comment
Disallow  /en/mkdocs-test/  Hidden version
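The single group above applies to every user agent and blocks one path. A minimal sketch of how a crawler interprets this group, using Python's standard `urllib.robotparser` (the rule lines are taken from the scan; the example page URLs are hypothetical paths on the site):

```python
from urllib.robotparser import RobotFileParser

# Rules as reported by the scan of docs.haskellstack.org/robots.txt
rules = [
    "User-agent: *",
    "Disallow: /en/mkdocs-test/",
]

parser = RobotFileParser()
parser.parse(rules)

# The hidden version is blocked for all crawlers...
print(parser.can_fetch("*", "https://docs.haskellstack.org/en/mkdocs-test/"))  # False
# ...while other documentation paths remain crawlable.
print(parser.can_fetch("*", "https://docs.haskellstack.org/en/stable/"))       # True
```

Because the group's `User-agent` line is `*`, the same answers apply regardless of which user-agent string is passed to `can_fetch`.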

Other Records

Field    Value
sitemap  https://docs.haskellstack.org/sitemap.xml

Comments

  • This robots.txt file is autogenerated by Read the Docs.
  • It controls the crawling and indexing of your documentation by search engines.
  • You can learn more about robots.txt, including how to customize it, in our documentation:
      • Our documentation on Robots.txt: https://docs.readthedocs.com/platform/stable/reference/robots.html
      • Our guide about SEO techniques: https://docs.readthedocs.com/platform/stable/guides/technical-docs-seo-guide.html
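Putting the group, the sitemap record, and the comments together, the scanned file plausibly reads as follows. This is a reconstruction from the fields reported above, not a verbatim copy; the exact comment wording and line order in the live file may differ:

```
# This robots.txt file is autogenerated by Read the Docs.
# It controls the crawling and indexing of your documentation by search engines.

User-agent: *
Disallow: /en/mkdocs-test/  # Hidden version

Sitemap: https://docs.haskellstack.org/sitemap.xml
```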