bcldb.com
robots.txt

Robots Exclusion Standard data for bcldb.com

Resource Scan

Scan Details

Site Domain bcldb.com
Base Domain bcldb.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't establish SSL connection.
Last Scan 2025-09-09T10:32:46+00:00
Next Scan 2025-12-08T10:32:46+00:00

Last Successful Scan

Scanned 2021-10-18T19:30:04+00:00
URL http://bcldb.com/robots.txt
Redirect http://www.bcldb.com/robots.txt
Redirect Domain www.bcldb.com
Redirect Base bcldb.com
Found Yes
Hash add426eece10ebaaf6509f2f83a13fcf31e2e7043af1c524056efb610ac53d46
SimHash 38941d08c774

Groups

*

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /themes/
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=filter%2Ftips%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F

Other Records

Field Value
crawl-delay 10
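The rules and crawl-delay above can be exercised with Python's standard `urllib.robotparser`. The sketch below uses an abbreviated excerpt of the scanned rule list, not the full file; the test URLs are illustrative.

```python
import urllib.robotparser

# Abbreviated excerpt of the last successfully scanned robots.txt.
robots_txt = """\
User-agent: *
Crawl-delay: 10
Disallow: /includes/
Disallow: /admin/
Disallow: /search/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Disallow rules match by path prefix, so /admin/users is also blocked.
print(rp.can_fetch("*", "http://www.bcldb.com/admin/users"))  # False
print(rp.can_fetch("*", "http://www.bcldb.com/node/123"))     # True
print(rp.crawl_delay("*"))                                    # 10
```

A polite crawler would sleep `crawl_delay` seconds between requests to hosts that declare it.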

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • For syntax checking, see:
  • http://www.frobee.com/robots-txt-check
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
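The comments note that robots.txt is honored only at the root of the host. A small sketch of how a crawler might derive that root location from any page URL, using only the standard `urllib.parse` module (the helper name `robots_url` is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the root robots.txt URL for the host serving page_url."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

# A copy at a deeper path is ignored; only the root copy counts.
print(robots_url("http://example.com/site/page.html"))
# http://example.com/robots.txt
```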