autocomplete.byojet.com
robots.txt

Robots Exclusion Standard data for autocomplete.byojet.com

Resource Scan

Scan Details

Site Domain autocomplete.byojet.com
Base Domain byojet.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't connect to server.
Last Scan 2025-10-01T17:45:25+00:00
Next Scan 2025-12-30T17:45:25+00:00

Last Successful Scan

Scanned 2021-11-09T22:24:28+00:00
URL https://autocomplete.byojet.com/robots.txt
Found Yes
Hash 63cefe69dd63b0a428650e4112f343300106f1c4a29fa042c1a4bcebb76b2c9c
SimHash 3a941d08c774
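The Hash field above is 64 hexadecimal characters, which is consistent with a SHA-256 digest of the fetched file body. A minimal sketch of how such a digest could be reproduced, assuming the hash is SHA-256 over the raw response bytes (the content below is a placeholder, not the actual scanned file):

```python
import hashlib

# Placeholder robots.txt body; the real scanned file would be used here.
body = b"User-agent: *\nDisallow: /\n"

# SHA-256 yields a 64-character hex digest, matching the shape of the
# Hash field recorded in the scan above.
digest = hashlib.sha256(body).hexdigest()
print(digest)
print(len(digest))
```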

Groups

*

Rule Path
Disallow /contact-us
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /themes/
Disallow /book/
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /INSTALL.sqlite.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /user/logout/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=filter%2Ftips%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F
Disallow /?q=user%2Flogout%2F
Disallow /account/changebooking/thanks
Disallow /
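The group above can be evaluated with Python's standard-library robots.txt parser. A minimal sketch using an abridged subset of the recorded rules; note that the trailing "Disallow: /" blocks every path for all user agents, so the more specific rules before it are effectively redundant:

```python
import urllib.robotparser

# Abridged reconstruction of the "*" group recorded above.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /user/login/
Disallow: /account/changebooking/thanks
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

base = "https://autocomplete.byojet.com"
for path in ("/admin/", "/some/other/page"):
    # The final "Disallow: /" means every path is disallowed for "*".
    print(path, parser.can_fetch("*", base + path))
```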

Other Records

Field Value
Sitemap https://byojet.com.au/sitemap.xml

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • For syntax checking, see:
  • http://www.frobee.com/robots-txt-check
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
  • Disallow all on all non prod domains
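The comment above about root placement ("Used: http://example.com/robots.txt, Ignored: http://example.com/site/robots.txt") can be sketched in code: given any page URL, the only robots.txt location a crawler will honor is at the host root. A minimal sketch with the standard library (`robots_url` is an illustrative helper, not part of any API):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the only robots.txt URL honored for this page's host.

    Per the comments above, crawlers ignore robots.txt files placed
    anywhere other than the root of the host.
    """
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

# Both page URLs resolve to the same root robots.txt location.
print(robots_url("http://example.com/site/page"))   # http://example.com/robots.txt
print(robots_url("http://example.com/robots.txt"))  # http://example.com/robots.txt
```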