work.cibhs.org
robots.txt

Robots Exclusion Standard data for work.cibhs.org

Resource Scan

Scan Details

Site Domain work.cibhs.org
Base Domain cibhs.org
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't connect to server.
Last Scan 2025-09-17T02:06:49+00:00
Next Scan 2025-12-16T02:06:49+00:00

Last Successful Scan

Scanned 2023-02-07T20:36:55+00:00
URL https://work.cibhs.org/robots.txt
Domain IPs 23.185.0.2
Response IP 23.185.0.2
Found Yes
Hash 931ccbd830e3d86e68ea7473ac0ed302011aae0b70d5a572ece1809804e497d3
SimHash 3a941d18c774
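The 64-hex-digit Hash above is consistent with a SHA-256 digest of the fetched file body. As an illustration only (the actual body bytes are not reproduced here, so the placeholder content below will not reproduce the digest shown), such a fingerprint could be computed like this:

```python
import hashlib

# Placeholder content standing in for the real robots.txt body.
body = b"User-agent: *\nDisallow: /admin/\n"

digest = hashlib.sha256(body).hexdigest()
print(digest)  # 64 lowercase hex characters
```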

Groups

*

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /themes/
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /find/
Disallow /calendar/
Disallow /user/register/
Disallow /user/password/
Disallow /help
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=filter%2Ftips%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=find%2F
Disallow /?q=calendar%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F
Disallow /?q=help

Other Records

Field Value
crawl-delay 10
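The group above, together with the crawl-delay record, can be evaluated locally with Python's `urllib.robotparser`. This is a sketch using only a subset of the scanned rules; the crawler name "mybot" is a placeholder:

```python
from urllib import robotparser

# A subset of the rules recorded in the scan above.
rules = """\
User-agent: *
Crawl-delay: 10
Disallow: /admin/
Disallow: /search/
Disallow: /user/register/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("mybot", "https://work.cibhs.org/admin/"))  # False (disallowed)
print(rp.can_fetch("mybot", "https://work.cibhs.org/about"))   # True (no matching rule)
print(rp.crawl_delay("mybot"))                                 # 10
```

Since the wildcard group applies to every user agent, any crawler name would get the same answers here; a real client would call `rp.set_url(...)` and `rp.read()` against the live URL instead of parsing an inline string.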

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts of your site by web crawlers and spiders run by sites like Yahoo! and Google. By telling these "robots" where not to go on your site, you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see: http://www.robotstxt.org/robotstxt.html
  • For syntax checking, see: http://www.frobee.com/robots-txt-check
  • Directories
  • Files
  • Paths (clean URLs)
  • Disallow: /user/login/
  • Paths (no clean URLs)