statscrew.com
robots.txt

Robots Exclusion Standard data for statscrew.com

Resource Scan

Scan Details

Site Domain statscrew.com
Base Domain statscrew.com
Scan Status Ok
Last Scan 2024-11-09T18:11:12+00:00
Next Scan 2024-11-16T18:11:12+00:00

Last Scan

Scanned 2024-11-09T18:11:12+00:00
URL https://statscrew.com/robots.txt
Redirect https://www.statscrew.com/robots.txt
Redirect Domain www.statscrew.com
Redirect Base statscrew.com
Domain IPs 2406:da18:9d0:143f:2124:4e9c:36a9:d9de, 52.221.42.138
Redirect IPs 104.21.86.220, 172.67.137.63, 2606:4700:3037::6815:56dc, 2606:4700:3037::ac43:893f
Response IP 172.67.137.63
Found Yes
Hash 877439049e78de8bbcb980d8fe59ba463dc86d2ea924aed86af880c31257d5d9
SimHash 29b15d212555

Groups

*

Rule Path
Disallow /ads/

Other Records

Field Value
crawl-delay 30
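
Reading the group and the record above together, the scanned file amounts to a single wildcard group: Disallow /ads/, with a crawl delay of 30 seconds. As a minimal sketch (not part of the scan output), Python's standard-library urllib.robotparser can query the file directly; the bot name "ExampleBot" and the checked paths below are illustrative assumptions.

  # Minimal sketch: query the scanned rules with urllib.robotparser.
  # The robots.txt URL comes from the scan above; "ExampleBot" and the
  # checked paths are hypothetical, used only to show how the rules apply.
  from urllib.robotparser import RobotFileParser

  rp = RobotFileParser()
  rp.set_url("https://www.statscrew.com/robots.txt")
  rp.read()  # fetch and parse the live file

  print(rp.can_fetch("ExampleBot", "https://www.statscrew.com/ads/page.html"))  # False: /ads/ is disallowed
  print(rp.can_fetch("ExampleBot", "https://www.statscrew.com/other/"))         # True: no rule blocks it
  print(rp.crawl_delay("ExampleBot"))  # 30, from the crawl-delay record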

Comments

  • ****************************************************************************
  • robots.txt
  • Robots, spiders, and search engines use this file to determine which
    content they should *not* crawl while indexing your website.
  • This system is called "The Robots Exclusion Standard."
  • It is strongly encouraged to use a robots.txt validator to check for
    valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area.
      User-agent: *
      Disallow: /admin/
  • Restrict Google and MSN from indexing your images.
      User-agent: Googlebot
      Disallow: /images/
      User-agent: MSNBot
      Disallow: /images/
  • ****************************************************************************
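
The second example in the comments groups rules per crawler. A small sketch of how a standard parser applies those example lines, assuming they make up a site's entire robots.txt; the rules are copied from the comment block, while example.com and "SomeOtherBot" are hypothetical.

  # Sketch: how the per-agent example from the comments is interpreted.
  from urllib.robotparser import RobotFileParser

  example_rules = [
      "User-agent: Googlebot",
      "Disallow: /images/",
      "",
      "User-agent: MSNBot",
      "Disallow: /images/",
  ]

  rp = RobotFileParser()
  rp.parse(example_rules)

  print(rp.can_fetch("Googlebot", "http://example.com/images/logo.png"))     # False: the Googlebot group applies
  print(rp.can_fetch("SomeOtherBot", "http://example.com/images/logo.png"))  # True: no group applies to other bots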