docs.rs robots.txt

Robots Exclusion Standard data for docs.rs

Resource Scan

Scan Details

Site Domain docs.rs
Base Domain docs.rs
Scan Status Ok
Last Scan 2025-06-20T12:11:20+00:00
Next Scan 2025-07-20T12:11:20+00:00

Last Scan

Scanned 2025-06-20T12:11:20+00:00
URL https://docs.rs/robots.txt
Redirect https://docs.rs/-/static/robots.txt
Domain IPs 2600:9000:271a:1e00:14:cae8:4080:93a1, 2600:9000:271a:5400:14:cae8:4080:93a1, 2600:9000:271a:5800:14:cae8:4080:93a1, 2600:9000:271a:6400:14:cae8:4080:93a1, 2600:9000:271a:8a00:14:cae8:4080:93a1, 2600:9000:271a:b200:14:cae8:4080:93a1, 2600:9000:271a:d600:14:cae8:4080:93a1, 2600:9000:271a:e200:14:cae8:4080:93a1, 3.165.75.53, 3.165.75.73, 3.165.75.80, 3.165.75.99
Response IP 3.165.75.80
Found Yes
Hash b0e763f831bf3c2bf5876806ce3b4ec9286c4ee47fbe1a8b8b72a0751de44b2b
SimHash aa589909a792
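
The Hash value is 64 hexadecimal digits, which is consistent with a SHA-256 digest of the fetched file, although the scan page does not state the algorithm. A minimal sketch of recomputing such a fingerprint, assuming SHA-256 over the raw response body (the body below is illustrative, not the recorded file, so its digest will differ from the Hash shown above):

    use sha2::{Digest, Sha256};

    fn main() {
        // Illustrative robots.txt body; the real file is served from
        // https://docs.rs/-/static/robots.txt and would hash differently.
        let body = "User-Agent: *\nDisallow: */%5E\nDisallow: */~\nSitemap: https://docs.rs/sitemap.xml\n";

        // SHA-256 digest, hex-encoded for comparison with a recorded hash.
        let digest = Sha256::digest(body.as_bytes());
        let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();
        println!("{hex}");
    }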

Groups

*

Rule Path
Disallow */%5E
Disallow */%5E
Disallow */~
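
This group applies to all user agents ('*') and blocks any path containing a semver caret or tilde requirement after a slash. The two identical-looking %5E rules most likely correspond to the raw ('^') and percent-encoded ('%5E') forms of the same character, as the comments below explain. A minimal sketch of Robots Exclusion Protocol wildcard matching against these rules, written in Rust (the example paths are hypothetical, and this is not any particular crawler's matcher):

    // Minimal Robots Exclusion Protocol path matcher: a Disallow pattern is a
    // prefix match where '*' matches any run of characters and a trailing '$'
    // anchors the end of the path.
    fn matches(pattern: &str, path: &str) -> bool {
        fn walk(pat: &[u8], path: &[u8]) -> bool {
            match pat.split_first() {
                None => true, // pattern exhausted: prefix match succeeds
                Some((b'$', rest)) if rest.is_empty() => path.is_empty(),
                Some((b'*', rest)) => (0..=path.len()).any(|i| walk(rest, &path[i..])),
                Some((&c, rest)) => path.first() == Some(&c) && walk(rest, &path[1..]),
            }
        }
        walk(pattern.as_bytes(), path.as_bytes())
    }

    fn main() {
        let disallow = ["*/%5E", "*/~"];
        for path in ["/clap/%5E4.0.0/clap/", "/clap/~4.0.0/clap/", "/clap/latest/clap/"] {
            let blocked = disallow.iter().any(|rule| matches(rule, path));
            println!("{path} -> {}", if blocked { "Disallow" } else { "Allow" });
        }
    }

Under these rules the first two example paths come out as Disallow and the last as Allow.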

Other Records

Field Value
sitemap https://docs.rs/sitemap.xml

Comments

  • Semver-based URLs are always redirects, and sometimes confuse Google's duplicate detection, so we block crawling them (https://docs.rs/about/redirections).
  • %5E is '^', URL-encoded. Based on the Search Console, Google may be encoding '^' before checking against robots.txt.
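
A quick illustration of the encoding the second comment describes: '^' is ASCII 0x5E, so a caret in a semver-style path percent-encodes to %5E, which is why the encoded form needs to be blocked as well. A small Rust sketch (the example path is hypothetical):

    // Percent-encode a single ASCII character as it would appear in a URL path.
    fn percent_encode(c: char) -> String {
        format!("%{:02X}", c as u32)
    }

    fn main() {
        // '^' is 0x5E in ASCII, so a semver caret requirement in a URL path
        // may reach the robots.txt check already encoded as %5E.
        assert_eq!(percent_encode('^'), "%5E");

        let raw = "/tokio/^1.0/tokio/";
        let encoded: String = raw
            .chars()
            .map(|c| if c == '^' { percent_encode(c) } else { c.to_string() })
            .collect();
        println!("{raw} -> {encoded}"); // /tokio/^1.0/tokio/ -> /tokio/%5E1.0/tokio/
    }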