spectator.org
robots.txt

Robots Exclusion Standard data for spectator.org

Resource Scan

Scan Details

Site Domain spectator.org
Base Domain spectator.org
Scan Status Ok
Last Scan 2024-09-20T04:13:03+00:00
Next Scan 2024-09-27T04:13:03+00:00

Last Scan

Scanned 2024-09-20T04:13:03+00:00
URL https://spectator.org/robots.txt
Domain IPs 192.190.221.34
Response IP 192.190.221.34
Found Yes
Hash c25925b8d39375eebb27ecc3a8ea0b92606ddeed1962d1e31f126a6b046f928c
SimHash 29951d216555
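
The Hash value is 64 hexadecimal characters, which suggests a SHA-256 digest of the fetched file body. A minimal sketch of reproducing it in Python, assuming the digest is taken over the raw response bytes (the result only matches while the file is unchanged since the scan):

  import hashlib
  import urllib.request

  # Fetch the current robots.txt and hash the raw response bytes.
  # Assumption: the Hash field above is a SHA-256 digest of the exact body;
  # the output only matches while the file is unchanged since the last scan.
  with urllib.request.urlopen("https://spectator.org/robots.txt") as resp:
      body = resp.read()

  print(hashlib.sha256(body).hexdigest())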

Groups

*

Rule      Path
Disallow  (empty)

Other Records

Field    Value
sitemap  https://spectator.org/sitemap_index.xml
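
The single group applies to every user agent and contains one Disallow rule with an empty path, which blocks nothing and permits crawling of the entire site; the sitemap record points crawlers at the index. A short sketch of reading these records with Python's standard urllib.robotparser:

  from urllib.robotparser import RobotFileParser

  # Parse the live robots.txt and check what the "*" group permits.
  # An empty Disallow path blocks nothing, so any URL should be fetchable.
  rp = RobotFileParser("https://spectator.org/robots.txt")
  rp.read()

  print(rp.can_fetch("*", "https://spectator.org/any/path"))  # expected: True
  print(rp.site_maps())  # Python 3.8+; expected: ['https://spectator.org/sitemap_index.xml']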

Comments

  • ****************************************************************************
  • robots.txt
  • : Robots, spiders, and search engines use this file to determine which
  • content they should *not* crawl while indexing your website.
  • : This system is called "The Robots Exclusion Standard."
  • : It is strongly encouraged to use a robots.txt validator to check
  • for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area.
  • : User-agent: *
  • : Disallow: /admin/
  • Restrict Google and MSN from indexing your images.
  • : User-agent: Googlebot
  • : Disallow: /images/
  • : User-agent: MSNBot
  • : Disallow: /images/
  • ****************************************************************************
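
The comments quote two example rule sets rather than active directives: one keeps all robots out of an admin area, the other blocks Googlebot and MSNBot from an images directory. A sketch of how a parser interprets those quoted examples, again using urllib.robotparser (illustrative only; these rules are not in effect on spectator.org):

  from urllib.robotparser import RobotFileParser

  # Feed the example rules quoted in the comments to the parser
  # (illustrative only; these rules are not active on spectator.org).
  rules = [
      "User-agent: *",
      "Disallow: /admin/",
      "",
      "User-agent: Googlebot",
      "Disallow: /images/",
      "",
      "User-agent: MSNBot",
      "Disallow: /images/",
  ]

  rp = RobotFileParser()
  rp.parse(rules)

  print(rp.can_fetch("*", "/admin/secret"))          # False: admin area is blocked for all robots
  print(rp.can_fetch("Googlebot", "/images/a.png"))  # False: images are blocked for Googlebot
  print(rp.can_fetch("Googlebot", "/news/story"))    # True: only /images/ is blocked for Googlebot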