arpadia.es
robots.txt

Robots Exclusion Standard data for arpadia.es

Resource Scan

Scan Details

Site Domain arpadia.es
Base Domain arpadia.es
Scan Status Ok
Last Scan 2025-06-26T07:37:03+00:00
Next Scan 2025-07-26T07:37:03+00:00

Last Scan

Scanned 2025-06-26T07:37:03+00:00
URL http://arpadia.es/robots.txt
Domain IPs 2001:8d8:100f:f000::239, 217.160.0.36
Response IP 217.160.0.36
Found Yes
Hash 3d33dfbc6512f88d7dcc1603e68c771368ede41979d42c253aca9c0ba6ad3447
SimHash 38949519c774

Groups

*

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /sites/
Disallow /themes/
Disallow /changelog.txt
Disallow /cron.php
Disallow /install.mysql.txt
Disallow /install.pgsql.txt
Disallow /install.php
Disallow /install.txt
Disallow /license.txt
Disallow /maintaners.txt
Disallow /update.php
Disallow /upgrade.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /comment
Disallow /contact/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
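The Disallow rules above can be checked programmatically with Python's standard-library `urllib.robotparser`. The sketch below parses a representative subset of the scanned group (the full file would be fetched from the URL listed under Last Scan; the subset here is chosen for illustration):

```python
from urllib import robotparser

# A subset of the scanned "*" group for arpadia.es
rules = """\
User-agent: *
Disallow: /includes/
Disallow: /admin/
Disallow: /install.php
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Paths under a disallowed prefix are blocked for all agents
print(rp.can_fetch("*", "http://arpadia.es/admin/settings"))  # False
# Paths not matched by any Disallow rule remain crawlable
print(rp.can_fetch("*", "http://arpadia.es/about"))           # True
```

In production a crawler would call `rp.set_url("http://arpadia.es/robots.txt")` followed by `rp.read()` instead of parsing inline text.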

Other Records

Field Value
crawl-delay 60
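The crawl-delay record (60 seconds here) is a non-standard but widely honored extension that `urllib.robotparser` also exposes. A minimal sketch, again parsing inline text rather than fetching the live file:

```python
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /admin/
Crawl-delay: 60
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# crawl_delay() returns the delay for the matching agent group,
# or None if no Crawl-delay record applies
delay = rp.crawl_delay("*")
print(delay)  # 60
```

A polite crawler would sleep for this many seconds between successive requests to the host.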

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • Directories
  • Files
  • Paths (clean URLs)