old.kreatos.be
robots.txt

Robots Exclusion Standard data for old.kreatos.be

Resource Scan

Scan Details

Site Domain old.kreatos.be
Base Domain kreatos.be
Scan Status Ok
Last Scan 2025-08-10T17:24:12+00:00
Next Scan 2025-09-09T17:24:12+00:00

Last Scan

Scanned 2025-08-10T17:24:12+00:00
URL https://old.kreatos.be/robots.txt
Domain IPs 178.208.62.74
Response IP 178.208.62.74
Found Yes
Hash e054f06b2b1e92228f8e9f742cd00c5b4c7a4456489309e612eacb115d5e848c
SimHash 789efd00cf50

Groups

*

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /themes/
Disallow /kreanet
Disallow /kreanet/
Disallow /kreanet/*
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /INSTALL.sqlite.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /user/logout/
Disallow /vip/register/
Disallow /vip/password/
Disallow /vip/login/
Disallow /vip/logout/
Disallow /vip/register
Disallow /vip/password
Disallow /vip/login
Disallow /vip/logout
Disallow /vip/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=filter%2Ftips%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F
Disallow /?q=user%2Flogout%2F
Disallow /?q=vip%2Fpassword%2F
Disallow /?q=vip%2Fregister%2F
Disallow /?q=vip%2Flogin%2F
Disallow /?q=vip%2Flogout%2F
Disallow /?q=vip%2Fpassword
Disallow /?q=vip%2Fregister
Disallow /?q=vip%2Flogin
Disallow /?q=vip%2Flogout
Disallow /?q=vip%2F

Other Records

Field Value
crawl-delay 10
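The crawl-delay record asks compliant crawlers to wait 10 seconds between requests (it is a nonstandard extension, and some major crawlers ignore it). A minimal sketch of how a crawler might read it with Python's standard-library parser, feeding in the record as it appears in this report:

```python
from urllib.robotparser import RobotFileParser

# Parse the crawl-delay record from this robots.txt without fetching it.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Crawl-delay: 10",
])

# Seconds a polite crawler should wait between requests (None if unset).
delay = rp.crawl_delay("*")
print(delay)  # 10
```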

crawler4j
crawler4j (http://code.google.com/p/crawler4j/)
curious george - www.analyticsseo.com/crawler
curious george - www.analyticsseo.com
megaindex.ru/2.0
megaindex.ru/
mozilla/5.0 (compatible; megaindex.ru/2.0; +https://www.megaindex.ru/?tab=linkanalyze)
mozilla/5.0 (compatible; mj12bot/v1.4.5; http://www.majestic12.co.uk/bot.php?+)
mj12
mozilla/5.0 (compatible; steeler/3.5; http://www.tkl.iis.u-tokyo.ac.jp/~crawler/)
steeler
microsoft-webdav-miniredir/6.1.7601
mozilla/5.0 (compatible; findxbot/1.0; +http://www.findxbot.com)
findxbot
mozilla/5.0 (compatible; seznambot/3.2; +http://fulltext.sblog.cz/)
seznambot
seznam
mozilla/4.0 (compatible; vagabondo/4.0; http://webagent.wise-guys.nl/)
mozilla/4.0 (compatible; vagabondo/4.0; webcrawler at wise-guys dot nl; http://webagent.wise-guys.nl/; http://www.wise-guys.nl/)
vagabondo
jakarta commons-httpclient/3.0.1
lwp::simple/5.822
mozilla/5.0 (compatible; smtbot/1.0; +http://www.similartech.com/smtbot)
smtbot
mozilla/5.0 (compatible; memorybot/1.22.56 +http://internetmemory.org/en/)
memorybot
mozilla/5.0 (compatible; smrjbot/0.0.20)
smrjbot
mozilla/5.0 (compatible; spbot/4.4.2; +http://openlinkprofiler.org/bot )
spbot
mj12bot
baiduspider
exabot
yandex
bspider

Rule Path
Disallow /

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • For syntax checking, see:
  • http://www.frobee.com/robots-txt-check
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
  • badbots
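As the comments note, the file only takes effect at the root of the host; a crawler that honors it can then check individual URLs against the rules. A minimal sketch with Python's standard-library parser, using a two-rule excerpt of the "*" group above (the /salons path is a made-up example, not taken from the scanned site):

```python
from urllib.robotparser import RobotFileParser

# A small excerpt of the "*" group scanned above; the parser is fed the
# lines directly so the example runs without fetching the live file.
rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /admin/
Disallow: /user/login/
""".splitlines())

# Disallowed path: matches the /admin/ rule.
print(rp.can_fetch("*", "https://old.kreatos.be/admin/config"))  # False
# Hypothetical public path: no rule matches, so fetching is allowed.
print(rp.can_fetch("*", "https://old.kreatos.be/salons"))        # True
```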

Warnings

  • 2 invalid lines.