e-marina.eu
robots.txt

Robots Exclusion Standard data for e-marina.eu

Resource Scan

Scan Details

Site Domain e-marina.eu
Base Domain e-marina.eu
Scan Status Ok
Last Scan 2025-12-28T05:18:35+00:00
Next Scan 2026-01-27T05:18:35+00:00

Last Scan

Scanned 2025-12-28T05:18:35+00:00
URL https://e-marina.eu/robots.txt
Redirect http://www.e-marina.eu/robots.txt
Redirect Domain www.e-marina.eu
Redirect Base e-marina.eu
Domain IPs 185.110.48.12
Redirect IPs 185.110.48.12
Response IP 185.110.48.12
Found Yes
Hash a1f69628cc62de83640f4bf421cebb03acd1e6e0242612cd47ef3b971f7a7e03
SimHash 3e941509cd74
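
The Hash above is 64 hexadecimal characters, which is consistent with a SHA-256 digest of the fetched file; the SimHash is a short locality-sensitive fingerprint, typically used to spot near-duplicate revisions between scans. A minimal Python sketch for reproducing the digest, assuming the scanner hashes the raw response body after following the redirect chain recorded above:

    import hashlib
    import urllib.request

    # Fetch the file; urlopen follows the redirect recorded above
    # (https://e-marina.eu/robots.txt -> http://www.e-marina.eu/robots.txt).
    with urllib.request.urlopen("https://e-marina.eu/robots.txt") as resp:
        body = resp.read()
        final_url = resp.geturl()  # URL after redirects

    # Assumption: the Hash field is SHA-256 over the raw bytes; any
    # normalization the scanner applies is not documented on this page.
    print(final_url)
    print(hashlib.sha256(body).hexdigest())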

Groups

*

Rule Path
Disallow /database/
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /sites/
Disallow /themes/
Disallow /scripts/
Disallow /updates/
Disallow /profiles/
Disallow /xmlrpc.php
Disallow /cron.php
Disallow /update.php
Disallow /install.php
Disallow /INSTALL.txt
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /CHANGELOG.txt
Disallow /MAINTAINERS.txt
Disallow /LICENSE.txt
Disallow /UPGRADE.txt
Disallow /admin/
Disallow /comment/reply/
Disallow /contact/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=contact%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F
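
This rule set appears to be the stock robots.txt shipped with Drupal (the CVS $Id tag in the comments below credits drumm, a Drupal maintainer): it blocks the CMS's internal directories, installer and maintenance scripts, and the user/admin paths in both clean-URL and ?q= query-string form. A quick way to test the group is Python's standard urllib.robotparser; note this fetches the live file, which may have changed since the scan:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://e-marina.eu/robots.txt")
    rp.read()

    # The group applies to User-agent: *, so any agent name matches it.
    print(rp.can_fetch("MyBot", "https://e-marina.eu/admin/"))       # False
    print(rp.can_fetch("MyBot", "https://e-marina.eu/user/login/"))  # False
    print(rp.can_fetch("MyBot", "https://e-marina.eu/node/123"))     # True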

Other Records

Field Value
crawl-delay 10
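
The single non-group record asks crawlers to wait 10 seconds between requests. Crawl-delay is a de facto extension rather than part of the original Robots Exclusion Standard (Google ignores it; Bing and others honor it). Python's urllib.robotparser exposes it directly:

    import time
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://e-marina.eu/robots.txt")
    rp.read()

    # crawl_delay() returns the delay in seconds, or None when absent.
    delay = rp.crawl_delay("*")  # expected: 10, per the record above
    time.sleep(delay or 1)       # pause between requests; fall back to 1s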

Comments

  • $Id: robots.txt,v 1.7.2.2 2008/02/25 02:18:25 drumm Exp $
  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
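
The placement rule quoted in the comments (the file is ignored unless it sits at the root of the host) means a crawler derives exactly one robots.txt URL per scheme-and-host pair. A small sketch of that derivation; robots_url is a hypothetical helper, not a library function:

    from urllib.parse import urlsplit, urlunsplit

    def robots_url(page_url: str) -> str:
        # Only the root location is consulted; copies elsewhere are ignored.
        parts = urlsplit(page_url)
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_url("http://example.com/site/page.html"))
    # -> http://example.com/robots.txt (per the Used/Ignored examples above)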