str.sg
robots.txt

Robots Exclusion Standard data for str.sg

Resource Scan

Scan Details

Site Domain str.sg
Base Domain str.sg
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-10-31T15:49:41+00:00
Next Scan 2024-11-14T15:49:41+00:00
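The failure stage and reason above correspond to a 4xx response when the scanner requests the robots.txt URL. A minimal sketch of reproducing that check with Python's standard library (the URL comes from this report; the 10-second timeout and the 4xx/5xx classification are ordinary HTTP conventions, not details of the scanner):

    import urllib.error
    import urllib.request

    # Re-fetch the resource recorded above and classify the outcome the way
    # the report does: a 4xx status is a "client error".
    URL = "https://str.sg/robots.txt"

    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            print(f"Fetched {len(resp.read())} bytes, HTTP {resp.status}")
    except urllib.error.HTTPError as exc:
        kind = "client" if 400 <= exc.code < 500 else "server"
        print(f"Fetch failed: HTTP {exc.code} ({kind} error)")
    except urllib.error.URLError as exc:
        print(f"Fetch failed before an HTTP response: {exc.reason}")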

Last Successful Scan

Scanned 2024-01-01T02:12:03+00:00
URL https://str.sg/robots.txt
Domain IPs 13.227.254.100, 13.227.254.125, 13.227.254.42, 13.227.254.49
Response IP 52.222.169.79
Found Yes
Hash 3f0808efbcc80d5c774eb49958ccd2586e1d3c1fc0044422145eea280d9133ed
SimHash b8109d0b4764
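The Hash value is a 64-character hex digest, which is consistent with SHA-256 over the fetched body (an assumption; the scanner does not state its algorithm, and the SimHash is a separate locality-sensitive fingerprint used for near-duplicate comparison). A sketch of computing such a digest:

    import hashlib

    # Assumed: the scan's Hash field is a SHA-256 hex digest of the response
    # body. The sample bytes below are illustrative only and will not
    # reproduce the digest recorded above.
    def content_hash(body: bytes) -> str:
        return hashlib.sha256(body).hexdigest()

    print(content_hash(b"User-agent: *\nCrawl-delay: 10\nDisallow: /\n"))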

Groups

*

Rule       Path
Disallow   /

Other Records

Field        Value
crawl-delay  10
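Taken together, the group and record above amount to a blanket Disallow with a ten-second crawl delay for every user agent. A minimal sketch of evaluating those rules with Python's urllib.robotparser (the rule lines are reconstructed from this report rather than copied verbatim from the scanned file, and the example path is arbitrary):

    from urllib import robotparser

    # Rules reconstructed from the last successful scan:
    # user-agent *, Disallow /, crawl-delay 10.
    rp = robotparser.RobotFileParser()
    rp.parse([
        "User-agent: *",
        "Crawl-delay: 10",
        "Disallow: /",
    ])

    print(rp.can_fetch("*", "https://str.sg/about"))  # False: all paths disallowed
    print(rp.crawl_delay("*"))                        # 10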

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • For syntax checking, see:
  • http://www.frobee.com/robots-txt-check
  • Directories
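The root-placement rule quoted in those comments can be expressed directly: only the file at the root of the host is consulted, so the effective robots.txt location is derived from the scheme and host of any page URL. A short illustration (the helper name is made up for this sketch):

    from urllib.parse import urlsplit, urlunsplit

    # Only a robots.txt at the root of the host is honoured, per the
    # Used/Ignored examples quoted in the comments above.
    def robots_location(page_url: str) -> str:
        parts = urlsplit(page_url)
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_location("http://example.com/site/robots.txt"))
    # -> http://example.com/robots.txt (the /site/ copy is ignored)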