freetools.seobility.net
robots.txt

Robots Exclusion Standard data for freetools.seobility.net

Resource Scan

Scan Details

Site Domain freetools.seobility.net
Base Domain seobility.net
Scan Status Ok
Last Scan 2024-09-09T11:15:00+00:00
Next Scan 2024-10-09T11:15:00+00:00

Last Scan

Scanned 2024-09-09T11:15:00+00:00
URL https://freetools.seobility.net/robots.txt
Domain IPs 104.21.44.30, 172.67.194.85, 2606:4700:3030::6815:2c1e, 2606:4700:3030::ac43:c255
Response IP 172.67.194.85
Found Yes
Hash d9ab879e187c9d72773a5f3969d39ccb4666d5bca215345bfbed2ff260e953fb
SimHash a79c45c9f5b5

Groups

*

Rule Path
Disallow /outbound/
Disallow /outbound/redirect.go
Disallow /outbound/redirect.go*
Disallow /de/seocheck/breuer-versand.de
Disallow /en/seocheck/breuer-versand.de
Disallow /es/seochecker/breuer-versand.de
Disallow /de/seocheck/tierhygiene.net
Disallow /en/seocheck/tierhygiene.net
Disallow /es/seochecker/tierhygiene.net
Disallow /de/seocheck/milchkannen24.de
Disallow /en/seocheck/milchkannen24.de
Disallow /es/seochecker/milchkannen24.de
Disallow /en/keywordchecker/check*
Disallow /de/keywordcheck/check*
Disallow /es/keywordchecker/check*
Disallow /en/seocompare/check*
Disallow /de/seocompare/check*
Disallow /es/comparador-web-seo/check*
Disallow /de/rankingcheck/check*
Disallow /en/rankingcheck/check*
Disallow /es/serpchecker/check*
Disallow /de/seocheck/pdfexport*
Disallow /en/seocheck/pdfexport*
Disallow /es/seochecker/pdfexport*
Disallow /pdfexample/
Disallow /de/wiki/api.php
Disallow /en/wiki/api.php
Disallow /es/wiki/api.php

seobility

Rule Path
Disallow /en/keywordcheck/
Disallow /de/keywordcheck/
Disallow /es/keywordchecker/
Disallow /en/seocompare/
Disallow /de/seocompare/
Disallow /es/comparador-web-seo/
Disallow /outbound/
Disallow /outbound/redirect.go
Disallow /outbound/redirect.go*
Disallow /pdfexample/
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/
Disallow /de/rankingcheck/
Disallow /en/rankingcheck/
Disallow /es/serpchecker/
Disallow /de/wdf-idf-tool/
Disallow /en/tf-idf-keyword-tool/
Disallow /es/herramienta-tf-idf/

yandex

No Disallow rules defined; all paths are allowed.

Other Records

Field Value
crawl-delay 15
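The yandex group is unusual in that it carries only a Crawl-delay record and no Disallow rules. A minimal sketch of how this parses with Python's stdlib urllib.robotparser: the delay is reported, and every path stays allowed.

```python
import urllib.robotparser

# The yandex group as it appears in the scanned file:
# a crawl-delay directive with no path rules.
rules = """\
User-agent: yandex
Crawl-delay: 15
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The delay is exposed per user-agent (Python 3.6+).
print(rp.crawl_delay("yandex"))          # 15
# With no Disallow lines, any path is fetchable for this agent.
print(rp.can_fetch("yandex", "/any/"))   # True
```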

slurp

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

ia_archiver

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

stackrambler

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

baiduspider

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

sogou

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

sogou web spider

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

semrushbot

Rule Path
Disallow /

semrushbot-sa

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /de/seocheck/
Disallow /en/seocheck/
Disallow /es/seochecker/

ahrefsbot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

zookabot

Rule Path
Disallow /

proximic

Rule Path
Disallow /

crystalsemanticsbot

Rule Path
Disallow /

larbin

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

sogou web spider

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

Comments

  • FROM http://de.wikipedia.org/robots.txt
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/

Warnings

  • 2 invalid lines.