netpoint.songtexte.com
robots.txt

Robots Exclusion Standard data for netpoint.songtexte.com

Resource Scan

Scan Details

Site Domain netpoint.songtexte.com
Base Domain songtexte.com
Scan Status Ok
Last Scan 2024-04-13T02:12:08+00:00
Next Scan 2024-05-13T02:12:08+00:00

Last Scan

Scanned 2024-04-13T02:12:08+00:00
URL https://netpoint.songtexte.com/robots.txt
Redirect https://www.songtexte.com/robots.txt
Redirect Domain www.songtexte.com
Redirect Base songtexte.com
Domain IPs 18.239.199.28, 18.239.199.71, 18.239.199.72, 18.239.199.84, 2600:9000:269b:2600:3:e8a3:f900:93a1, 2600:9000:269b:600:3:e8a3:f900:93a1, 2600:9000:269b:a000:3:e8a3:f900:93a1, 2600:9000:269b:a400:3:e8a3:f900:93a1, 2600:9000:269b:ce00:3:e8a3:f900:93a1, 2600:9000:269b:ec00:3:e8a3:f900:93a1, 2600:9000:269b:f000:3:e8a3:f900:93a1, 2600:9000:269b:f400:3:e8a3:f900:93a1
Redirect IPs 18.154.144.20, 18.154.144.6, 18.154.144.64, 18.154.144.70, 2600:9000:269b:1400:3:e8a3:f900:93a1, 2600:9000:269b:5e00:3:e8a3:f900:93a1, 2600:9000:269b:6800:3:e8a3:f900:93a1, 2600:9000:269b:a400:3:e8a3:f900:93a1, 2600:9000:269b:ce00:3:e8a3:f900:93a1, 2600:9000:269b:ec00:3:e8a3:f900:93a1, 2600:9000:269b:f200:3:e8a3:f900:93a1, 2600:9000:269b:f800:3:e8a3:f900:93a1
Response IP 18.165.171.60
Found Yes
Hash 86ef1ad60cb037dc00764df4f192e1243d30c6a2607f5a64b7a7b71b1166d9b2
SimHash b6301949eef7
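
The Hash field is 64 hexadecimal characters, which matches a SHA-256 digest. A minimal sketch of reproducing it, assuming the scanner hashes the raw response body after following the redirect (that assumption is not documented on this page):

    import hashlib
    import urllib.request

    # Fetch the robots.txt; urllib follows the redirect to
    # https://www.songtexte.com/robots.txt automatically.
    url = "https://netpoint.songtexte.com/robots.txt"
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        final_url = resp.geturl()  # the post-redirect URL

    # SHA-256 of the raw body; compare against the Hash field above.
    print(final_url)
    print(hashlib.sha256(body).hexdigest())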

Groups

grapeshot

Rule Path
Disallow /

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 2
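
The msnbot group defines no path rules but asks for a 2-second crawl delay. A minimal sketch of honoring it with Python's stdlib urllib.robotparser; the paths polled here are hypothetical examples, not taken from the site:

    import time
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://www.songtexte.com/robots.txt")
    rp.read()

    # crawl_delay() returns the Crawl-delay for the given user agent,
    # or None if its group does not define one.
    delay = rp.crawl_delay("msnbot") or 0

    for path in ["/", "/songtext/example"]:  # hypothetical paths
        if rp.can_fetch("msnbot", path):
            print("fetching", path)  # ...issue the actual request here...
            time.sleep(delay)        # wait 2 seconds between hits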

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

chordie.com (php)

Rule Path
Disallow /

chordie.com webcrawler

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

*

Rule Path
Disallow /error/
Disallow /anmelden
Disallow /confirm/
Disallow /tracking.js

Other Records

Field Value
sitemap https://www.songtexte.com/sitemap/Sitemap.xml
sitemap https://www.songtexte.com/sitemap/news-sitemap.xml
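
A minimal sketch of testing the wildcard rules above and reading the two Sitemap records with urllib.robotparser. "MyBot" is a hypothetical user agent that matches no named group and so falls through to the * group; site_maps() requires Python 3.8+:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://www.songtexte.com/robots.txt")
    rp.read()

    # The first four paths come from the wildcard group's Disallow rules
    # and should print False; "/" is not disallowed and should print True.
    for path in ["/error/500", "/anmelden", "/confirm/abc", "/tracking.js", "/"]:
        print(path, rp.can_fetch("MyBot", path))

    # site_maps() returns the list of Sitemap URLs, or None if absent.
    print(rp.site_maps())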

Comments

  • robots.txt for Songtexte.com
  • adapted from the one for http://www.wikipedia.org/ and friends
  • Please note: There are a lot of pages on this site, and there are
  • some misbehaved spiders out there that go _way_ too fast. If you're
  • irresponsible, your access to the site may be blocked.
  • advertising-related bots:
  • User-agent: Mediapartners-Google*
  • Disallow: /
  • grapeshot often comes with 30+ crawlers at once
  • msnbot should crawl a bit slower
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem.
  • Please read the man page and use it properly; there is a
  • --wait option you can use to set the delay between hits,
  • for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Friendly, low-speed bots are welcome viewing article pages, but not
  • dynamically-generated pages please.
  • Inktomi's "Slurp" can read a minimum delay between hits; if your
  • bot supports such a thing using the 'Crawl-delay' or another
  • instruction, please let us know.