statscrew.com
robots.txt

Robots Exclusion Standard data for statscrew.com

Resource Scan

Scan Details

Site Domain statscrew.com
Base Domain statscrew.com
Scan Status Ok
Last Scan 2024-05-25T10:56:04+00:00
Next Scan 2024-06-01T10:56:04+00:00

Last Scan

Scanned 2024-05-25T10:56:04+00:00
URL https://statscrew.com/robots.txt
Redirect https://www.statscrew.com/robots.txt
Redirect Domain www.statscrew.com
Redirect Base statscrew.com
Domain IPs 13.250.129.152, 2406:da18:9d0:143e:8e74:1b1a:98b9:2813, 2406:da18:9d0:143f:29e7:ae24:cfea:e9bb, 54.151.156.30
Redirect IPs 104.21.86.220, 172.67.137.63, 2606:4700:3037::6815:56dc, 2606:4700:3037::ac43:893f
Response IP 104.21.86.220
Found Yes
Hash 877439049e78de8bbcb980d8fe59ba463dc86d2ea924aed86af880c31257d5d9
SimHash 29b15d212555

Groups

*

Rule Path
Disallow /ads/

Other Records

Field Value
crawl-delay 30
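
The group above (user-agent `*`, `Disallow: /ads/`, `Crawl-delay: 30`) can be checked programmatically. A minimal sketch using Python's standard-library `urllib.robotparser`, feeding it the reported rules directly rather than fetching https://www.statscrew.com/robots.txt live (the example URLs are hypothetical paths, chosen only to illustrate matching):

```python
from urllib.robotparser import RobotFileParser

# The rules reported in the scan above, as robots.txt text.
rules = """\
User-agent: *
Disallow: /ads/
Crawl-delay: 30
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Paths under /ads/ are blocked for every user agent...
print(rp.can_fetch("*", "https://www.statscrew.com/ads/banner.png"))  # False
# ...while everything else remains crawlable.
print(rp.can_fetch("*", "https://www.statscrew.com/football/"))       # True
# The non-standard crawl-delay record is also exposed by the parser.
print(rp.crawl_delay("*"))                                            # 30
```

Note that `Crawl-delay` is not part of the Robots Exclusion Protocol itself; support varies by crawler, though `urllib.robotparser` parses it as shown.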

Comments

  • ****************************************************************************
  • robots.txt
  • Robots, spiders, and search engines use this file to determine which
  • content they should *not* crawl while indexing your website.
  • This system is called "The Robots Exclusion Standard."
  • You are strongly encouraged to run a robots.txt validator to check
  • for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area:
  • User-agent: *
  • Disallow: /admin/
  • Restrict Google and MSN from indexing your images:
  • User-agent: Googlebot
  • Disallow: /images/
  • User-agent: MSNBot
  • Disallow: /images/
  • ****************************************************************************
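
The example groups in the comments above illustrate a subtlety of the Robots Exclusion Standard: a crawler obeys only the single most specific matching group, so `Googlebot` follows its own group and ignores the `*` group's `/admin/` rule. A sketch verifying this with Python's standard-library `urllib.robotparser` (example.com and the sample paths are placeholders, not taken from the scanned site):

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt from the comments above.
example = """\
User-agent: *
Disallow: /admin/

User-agent: Googlebot
Disallow: /images/

User-agent: MSNBot
Disallow: /images/
"""

rp = RobotFileParser()
rp.parse(example.splitlines())

# Googlebot is blocked from /images/ by its own group...
print(rp.can_fetch("Googlebot", "https://example.com/images/logo.png"))  # False
# ...but the * group's /admin/ rule does not apply to it.
print(rp.can_fetch("Googlebot", "https://example.com/admin/"))           # True
# Unlisted crawlers fall back to the * group and are kept out of /admin/.
print(rp.can_fetch("SomeOtherBot", "https://example.com/admin/"))        # False
```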