haskellstack.org
robots.txt

Robots Exclusion Standard data for haskellstack.org

Resource Scan

Scan Details

Site Domain haskellstack.org
Base Domain haskellstack.org
Scan Status Ok
Last Scan 7/24/2025, 7:45:57 PM
Next Scan 8/23/2025, 7:45:57 PM

Last Scan

Scanned 7/24/2025, 7:45:57 PM
URL https://haskellstack.org/robots.txt
Redirect https://docs.haskellstack.org/robots.txt
Redirect Domain docs.haskellstack.org
Redirect Base haskellstack.org
Domain IPs 104.21.9.196, 172.67.131.5, 2606:4700:3033::ac43:8305, 2606:4700:3034::6815:9c4
Redirect IPs 104.16.253.120, 104.16.254.120, 2606:4700::6810:fd78, 2606:4700::6810:fe78
Response IP 104.16.254.120
Found Yes
Hash 4e7e715a2fc3f85bcfed73dd230f1ed49f16804974ce6a97be2113b7cae80b9e
SimHash 2a053d01a7e6

Groups

*

Rule      Path              Comment
Disallow  /en/mkdocs-test/  Hidden version

Other Records

Field    Value
sitemap  https://docs.haskellstack.org/sitemap.xml
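The group and sitemap records above can be checked programmatically. As a minimal sketch using Python's standard-library `urllib.robotparser`, assuming the scanned file contains exactly the one rule and one sitemap record reported here:

```python
from urllib.robotparser import RobotFileParser

# Reconstructed from the scan's group and sitemap records (an assumption,
# not the verbatim file contents).
ROBOTS_TXT = """\
User-agent: *
Disallow: /en/mkdocs-test/

Sitemap: https://docs.haskellstack.org/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The hidden-version path is blocked for all user agents...
print(rp.can_fetch("*", "https://docs.haskellstack.org/en/mkdocs-test/"))  # False
# ...while other documentation paths remain crawlable.
print(rp.can_fetch("*", "https://docs.haskellstack.org/en/stable/"))       # True
```

Note that `RobotFileParser` ignores the `Sitemap:` line for fetch decisions; on Python 3.8+, `rp.site_maps()` returns the listed sitemap URLs.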

Comments

  • This robots.txt file is autogenerated by Read the Docs.
  • It controls the crawling and indexing of your documentation by search engines.
  • You can learn more about robots.txt, including how to customize it, in our documentation:
    • Our documentation on Robots.txt: https://docs.readthedocs.com/platform/stable/reference/robots.html
    • Our guide about SEO techniques: https://docs.readthedocs.com/platform/stable/guides/technical-docs-seo-guide.html
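Taken together, the comments, group, and sitemap records suggest the scanned file reads approximately as follows. This is a sketch: the exact comment wording, line order, and blank-line placement are assumptions based on the scan data, not the verbatim file.

```
# This robots.txt file is autogenerated by Read the Docs.
# It controls the crawling and indexing of your documentation by search engines.
# You can learn more about robots.txt, including how to customize it, in our documentation:
# * Our documentation on Robots.txt: https://docs.readthedocs.com/platform/stable/reference/robots.html
# * Our guide about SEO techniques: https://docs.readthedocs.com/platform/stable/guides/technical-docs-seo-guide.html

User-agent: *
Disallow: /en/mkdocs-test/ # Hidden version

Sitemap: https://docs.haskellstack.org/sitemap.xml
```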