dso.org
robots.txt

Robots Exclusion Standard data for dso.org

Resource Scan

Scan Details

Site Domain dso.org
Base Domain dso.org
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-09-06T12:38:18+00:00
Next Scan 2025-12-05T12:38:18+00:00
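
The failure above happened at the fetch step, with the server answering the robots.txt request with a 4xx status. A minimal sketch of that kind of check, using Python's standard urllib and an assumed "ok / client-error / server-error" labelling (the labels are illustrative, not the scanner's own):

    from urllib import request, error

    def fetch_robots(url: str):
        """Fetch a robots.txt URL, following redirects, and classify the outcome."""
        try:
            with request.urlopen(url, timeout=10) as resp:
                # Redirects (e.g. dso.org -> www.dso.org) are followed automatically.
                return "ok", resp.read().decode("utf-8", errors="replace")
        except error.HTTPError as exc:
            # A 4xx status is what the report above calls "a client error".
            if 400 <= exc.code < 500:
                return "client-error", None
            return "server-error", None
        except error.URLError:
            # DNS failure, refused connection, timeout, and similar.
            return "unreachable", None

    print(fetch_robots("https://dso.org/robots.txt")[0])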

Last Successful Scan

Scanned 2024-10-19T11:02:22+00:00
URL https://dso.org/robots.txt
Redirect https://www.dso.org:443/robots.txt
Redirect Domain www.dso.org
Redirect Base dso.org
Domain IPs 34.232.124.39
Redirect IPs 3.165.82.11, 3.165.82.113, 3.165.82.119, 3.165.82.61
Response IP 3.165.82.113
Found Yes
Hash 8cd78fa77cca2946da59a3c9a3ac331d9fb473e71008fc463ed1a3aa4aadef9c
SimHash b8149d1bc774

Groups

User-agent: *

Rule      Path
Disallow  /search/
Disallow  /admin/

Other Records

Field        Value
crawl-delay  10
sitemap      https://www.dso.org/home/sitemap
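
Taken together, the group and the records above are the complete rule set a compliant crawler sees for dso.org. A minimal sketch of how a client would interpret them, using Python's standard urllib.robotparser with the records inlined instead of fetched:

    from urllib.robotparser import RobotFileParser

    # The records listed above, as the lines a fetch of robots.txt would return.
    rules = [
        "User-agent: *",
        "Disallow: /search/",
        "Disallow: /admin/",
        "Crawl-delay: 10",
        "Sitemap: https://www.dso.org/home/sitemap",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    print(parser.can_fetch("*", "https://www.dso.org/search/results"))  # False: under /search/
    print(parser.can_fetch("*", "https://www.dso.org/events/"))         # True: not disallowed
    print(parser.crawl_delay("*"))                                      # 10 (seconds between requests)
    print(parser.site_maps())  # ['https://www.dso.org/home/sitemap'] (Python 3.8+)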

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts of your site by web crawlers and spiders run by sites like Yahoo! and Google. By telling these "robots" where not to go on your site, you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see: http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see: http://www.sxw.org.uk/computing/robots/check.html
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
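
The root-of-host rule quoted in the comments is why the scanner requests /robots.txt against the bare host rather than against any deeper path. A small sketch of deriving that one honoured URL from an arbitrary page URL, using Python's standard urllib.parse (the helper name robots_url is hypothetical):

    from urllib.parse import urlsplit, urlunsplit

    def robots_url(page_url: str) -> str:
        """Return the root robots.txt URL for the host serving page_url."""
        parts = urlsplit(page_url)
        # Only the scheme and host are kept; a path such as /site/ is dropped,
        # because http://example.com/site/robots.txt would be ignored.
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_url("http://example.com/site/page.html"))  # http://example.com/robots.txt
    print(robots_url("https://www.dso.org/home/sitemap"))   # https://www.dso.org/robots.txt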