truckoncheesewedge.com
robots.txt

Robots Exclusion Standard data for truckoncheesewedge.com

Resource Scan

Scan Details

Site Domain truckoncheesewedge.com
Base Domain truckoncheesewedge.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't connect to server.
Last Scan 2024-05-25T13:47:26+00:00
Next Scan 2024-07-24T13:47:26+00:00

Last Successful Scan

Scanned 2024-03-04T12:04:45+00:00
URL https://www.truckoncheesewedge.com/robots.txt
Domain IPs 208.113.198.19
Response IP 208.113.198.19
Found Yes
Hash 8ff1aed945be7f2a97942f68c0bc3f574d2b66d73ccec61ae9a73fe83f66167d
SimHash 3a941d08cd74

Groups

*

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /sites/
Disallow /themes/
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /contact/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=contact%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F
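The group above can be checked with Python's standard-library robots.txt parser. This is a minimal sketch: a representative subset of the rules is pasted inline rather than fetched, since the latest scan shows the server as unreachable.

```python
# Check paths against the "*" group using the stdlib parser.
from urllib.robotparser import RobotFileParser

# A subset of the recorded rules, reproduced inline (not fetched).
rules = """\
User-agent: *
Disallow: /includes/
Disallow: /admin/
Disallow: /user/login/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "/admin/"))    # False: blocked by Disallow: /admin/
print(parser.can_fetch("*", "/node/123"))  # True: no matching rule, allowed by default
```

A path is disallowed if it matches any `Disallow` prefix in the group that applies to the crawler's user agent; anything unmatched is allowed by default.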

Other Records

Field Value
crawl-delay 10
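The crawl-delay record can be read the same way. A minimal sketch, again with the record reproduced inline; the value asks crawlers to wait 10 seconds between requests, though not all crawlers honour the field.

```python
# Read the Crawl-delay value for the "*" group with the stdlib parser.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Crawl-delay: 10",
])

print(parser.crawl_delay("*"))  # 10
```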

Comments

  • $Id: robots.txt,v 1.9.2.1 2008/12/10 20:12:19 goba Exp $
  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
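The "Used"/"Ignored" comment above reflects that a robots.txt is only honoured at the host root. A sketch of deriving that root URL from any page URL, using a hypothetical helper built on the stdlib:

```python
# Hypothetical helper: compute the only robots.txt location a crawler
# will honour for a given page URL (host root, per the comment above).
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url):
    parts = urlsplit(page_url)
    # Discard the path, query, and fragment; keep scheme and host only.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://example.com/site/deep/page.html"))
# http://example.com/robots.txt
```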