rabota.at
robots.txt

Robots Exclusion Standard data for rabota.at

Resource Scan

Scan Details

Site Domain rabota.at
Base Domain rabota.at
Scan Status Ok
Last Scan 2026-03-05T18:46:50+00:00
Next Scan 2026-03-12T18:46:50+00:00

Last Scan

Scanned 2026-03-05T18:46:50+00:00
URL https://rabota.at/robots.txt
Redirect http://www.rabota.at/robots.txt
Redirect Domain www.rabota.at
Redirect Base rabota.at
Domain IPs 91.200.40.64
Redirect IPs 91.200.40.64
Response IP 91.200.40.64
Found Yes
Hash fd75a10e58f379dc97ab4230fa396f2c4ee13a87c581565eca9add898410be0d
SimHash 3a941d08c77c
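
The Hash field is a 64-character hex digest of the fetched robots.txt body, which a scanner can compare between runs to decide whether the file changed before the Next Scan date. A minimal sketch of that idea, assuming the digest is SHA-256 (the length is consistent with it, but the report does not state the algorithm):

import hashlib
import urllib.request

# Fetch the robots.txt body and hash it; an unchanged digest on the next
# scan means the rules did not change. SHA-256 is an assumption about the
# report's Hash field, not something the report itself states.
with urllib.request.urlopen("https://rabota.at/robots.txt") as resp:
    body = resp.read()

print(hashlib.sha256(body).hexdigest())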

Groups

*

Rule Path
Disallow /includes/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /job-
Disallow /impressum
Disallow /de/impressum
Disallow /ru/impressum
Disallow /werbung
Disallow /reklama
Disallow /ru/reklama
Disallow /de/werbung
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=filter%2Ftips%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F

Other Records

Field Value
crawl-delay 10
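
A minimal sketch of how the group and crawl delay above can be evaluated with Python's standard-library urllib.robotparser. The rule list is abridged and embedded directly for illustration, so results for the live file at https://rabota.at/robots.txt may differ:

from urllib.robotparser import RobotFileParser

# Abridged copy of the rules shown above; parse() accepts an iterable of lines.
rules = """\
User-agent: *
Crawl-delay: 10
Disallow: /includes/
Disallow: /admin/
Disallow: /search/
Disallow: /job-
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Paths matching a Disallow prefix are blocked for every user agent ("*").
print(parser.can_fetch("*", "https://rabota.at/admin/config"))  # False
print(parser.can_fetch("*", "https://rabota.at/jobs"))          # True (no prefix matches)
print(parser.crawl_delay("*"))                                  # 10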

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts of your site by web crawlers and spiders run by sites like Yahoo! and Google. By telling these "robots" where not to go on your site, you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host: Used: http://example.com/robots.txt; Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see: http://www.robotstxt.org/robotstxt.html
  • For syntax checking, see: http://www.frobee.com/robots-txt-check
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
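
The comments' point about file placement (Used vs. Ignored) can be made concrete with a small helper: only the robots.txt at the root of the host is honored, so a crawler derives that URL by dropping the page URL's path. The helper name below is illustrative, not part of any library:

from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    # Keep only the scheme and host; path, query, and fragment are discarded
    # because a robots.txt anywhere below the root is ignored by crawlers.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://example.com/site/page.html"))  # http://example.com/robots.txt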