sailingresources.org.au
robots.txt

Robots Exclusion Standard data for sailingresources.org.au

Resource Scan

Scan Details

Site Domain sailingresources.org.au
Base Domain sailingresources.org.au
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-08-24T08:31:54+00:00
Next Scan 2024-11-22T08:31:54+00:00
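
The scanner's fetch step is not shown on this page, but "client error" conventionally means an HTTP status in the 4xx range. A minimal Python sketch of how that outcome can be told apart from other fetch failures (the timeout value is an assumption):

    import urllib.error
    import urllib.request

    ROBOTS_URL = "https://sailingresources.org.au/robots.txt"

    try:
        with urllib.request.urlopen(ROBOTS_URL, timeout=10) as resp:
            body = resp.read()
            print("Fetched", len(body), "bytes")
    except urllib.error.HTTPError as exc:
        # 4xx is a client error (the failure reason recorded above);
        # 5xx would be reported as a server error instead.
        kind = "client" if 400 <= exc.code < 500 else "server"
        print(f"Fetch failed: HTTP {exc.code} ({kind} error)")
    except urllib.error.URLError as exc:
        print("Fetch failed before any HTTP response:", exc.reason)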

Last Successful Scan

Scanned 2024-01-28T08:30:23+00:00
URL https://sailingresources.org.au/robots.txt
Domain IPs 13.33.88.33, 13.33.88.39, 13.33.88.72, 13.33.88.74
Response IP 99.86.91.72
Found Yes
Hash 88d8ca40e36265ae01b675dca281af019946894b191b68368373934f0b1b659f
SimHash ea149d014775
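
The 64 hex digits of the Hash field are consistent with a SHA-256 digest of the fetched file body; whether the scanner hashes the raw bytes or normalised text is an assumption. A minimal sketch of deriving such a value:

    import hashlib

    def content_hash(body: bytes) -> str:
        # SHA-256 digest of the robots.txt body, hex-encoded like the Hash field above.
        return hashlib.sha256(body).hexdigest()

    print(content_hash(b"User-agent: *\nDisallow: /\n"))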

Groups

*

Rule Path
Disallow /

proximic

Rule Path
Disallow /

googlebot
googlebot-image
mediapartners-google
slurp
yahoo-blogs
yahoo-mmcrawler
browsershots
twitterbot

Rule Path
Allow /
Disallow /includes/
Disallow /reports/
Disallow /scripts/

bingbot

Rule Path
Disallow

Other Records

Field Value
crawl-delay 5
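
Taken together, the groups and the crawl-delay record above can be exercised with Python's standard urllib.robotparser. The file text below is reconstructed from this report, abbreviated to a single agent for the allow-listed group; placing Crawl-delay inside the "*" group is an assumption, since the report does not record which group it belongs to:

    import urllib.robotparser

    ROBOTS_TXT = """\
    User-agent: *
    Crawl-delay: 5
    Disallow: /

    User-agent: proximic
    Disallow: /

    User-agent: googlebot
    Allow: /
    Disallow: /includes/
    Disallow: /reports/
    Disallow: /scripts/

    User-agent: bingbot
    Disallow:
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())

    # Unlisted agents fall through to the "*" group and are blocked everywhere.
    print(rp.can_fetch("SomeOtherBot", "/page"))     # False
    # bingbot's empty Disallow disallows nothing, so everything is permitted.
    print(rp.can_fetch("bingbot", "/includes/x"))    # True
    print(rp.crawl_delay("SomeOtherBot"))            # 5

    # Note: urllib.robotparser applies the first matching rule in a group, so
    # the leading "Allow: /" wins for googlebot even under /includes/; crawlers
    # using longest-match semantics (e.g. Google) would honour the Disallows.
    print(rp.can_fetch("googlebot", "/includes/x"))  # True under first-match

(The indentation inside the triple-quoted string is harmless to robotparser, which strips leading whitespace from each line when parsing.)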

Comments

  • $Id: robots.txt,v 1.9.2.1 2008/12/10 20:12:19 goba Exp $
  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • disallow all
  • but allow only important bots
  • User-agent: msnbot #### ignored for violation of robots file
  • User-agent: msnbot-media #### ignored for violation of robots file
  • Directories
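
The root-placement rule quoted in the comments (the Used vs. Ignored example) can be sketched in Python: the authoritative robots.txt location is derived from a URL's scheme and host only, so a copy under a subdirectory such as /site/ is never consulted:

    from urllib.parse import urlsplit, urlunsplit

    def robots_url(page_url: str) -> str:
        # Keep only scheme and host; the path is always /robots.txt.
        parts = urlsplit(page_url)
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_url("http://example.com/site/deep/page.html"))
    # -> http://example.com/robots.txt (http://example.com/site/robots.txt is ignored)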