austiblox.net
robots.txt

Robots Exclusion Standard data for austiblox.net

Resource Scan

Scan Details

Site Domain austiblox.net
Base Domain austiblox.net
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 4/20/2025, 7:05:51 AM
Next Scan 4/27/2025, 7:05:51 AM

Last Successful Scan

Scanned 4/5/2025, 7:04:46 AM
URL https://austiblox.net/robots.txt
Domain IPs 104.21.2.49, 172.67.152.136, 2606:4700:3030::6815:231, 2606:4700:3033::ac43:9888
Response IP 104.21.2.49
Found Yes
Hash 6550443b1dd80e65e4075f407f01e7d1a4fbbbbce4e42e469e9ffdd82b48e916
SimHash 3c17115a4775

Groups

*

Rule Path
Disallow /ow_version.xml
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /README.txt
Disallow /UPDATE.txt
Disallow /CHANGELOG.txt
Disallow /admin/

ia_archiver

Rule Path
Disallow /
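The two groups above can be checked programmatically. The sketch below reconstructs the scanned rules as a literal string (it does not fetch the live file, and the example URLs are illustrative) and queries them with Python's standard-library `urllib.robotparser`:

```python
from urllib.robotparser import RobotFileParser

# Rules reconstructed from the scan above (not fetched live).
ROBOTS_TXT = """\
User-agent: *
Disallow: /ow_version.xml
Disallow: /INSTALL.txt
Disallow: /LICENSE.txt
Disallow: /README.txt
Disallow: /UPDATE.txt
Disallow: /CHANGELOG.txt
Disallow: /admin/

User-agent: ia_archiver
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Generic crawlers fall under the "*" group: /admin/ is blocked,
# everything else is allowed.
print(rp.can_fetch("*", "https://austiblox.net/admin/"))      # False
print(rp.can_fetch("*", "https://austiblox.net/some-page"))   # True

# ia_archiver (the Internet Archive crawler) is blocked entirely.
print(rp.can_fetch("ia_archiver", "https://austiblox.net/"))  # False
```

Note that a crawler matching a specific group (here `ia_archiver`) obeys only that group, not the `*` rules.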

Comments

  • This file contains rules to prevent the crawling and indexing of certain parts
  • of your website by the spiders of major search engines like Google and Yahoo.
  • By managing these rules you can allow or disallow access to specific folders
  • and files for such spiders.
  • This is a good way to hide private data and save bandwidth.
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html