wikicryptocoins.com
robots.txt

Robots Exclusion Standard data for wikicryptocoins.com

Resource Scan

Scan Details

Site Domain wikicryptocoins.com
Base Domain wikicryptocoins.com
Scan Status Ok
Last Scan 2024-09-13T07:34:05+00:00
Next Scan 2024-10-13T07:34:05+00:00

Last Scan

Scanned 2024-09-13T07:34:05+00:00
URL https://wikicryptocoins.com/robots.txt
Domain IPs 104.21.18.231, 172.67.183.224, 2606:4700:3030::6815:12e7, 2606:4700:3034::ac43:b7e0
Response IP 172.67.183.224
Found Yes
Hash 49afa55cf2812d2b9c853cbd04ce8124eb33490ba9ad390ccdd499dd2a335436
SimHash a25809c9efd7
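The Hash value above looks like a SHA-256 digest of the fetched robots.txt body. A minimal sketch to reproduce it, assuming the digest is taken over the raw response bytes:

import hashlib
import urllib.request

EXPECTED = "49afa55cf2812d2b9c853cbd04ce8124eb33490ba9ad390ccdd499dd2a335436"

# Re-fetch the file and compare its digest with the Hash recorded by the scan.
with urllib.request.urlopen("https://wikicryptocoins.com/robots.txt") as resp:
    body = resp.read()

digest = hashlib.sha256(body).hexdigest()
print(digest)
print("matches scan" if digest == EXPECTED else "differs (file may have changed since the last scan)")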

Groups

*

Rule Path
Allow /wiki/Special%3AStaticMap
Allow /currency/Special%3AStaticMap
Disallow /index.php?
Disallow /api.php
Disallow /*feed%3Drss
Disallow /*action%3Dedit
Disallow /*action%3Dhistory
Disallow /*action%3Ddelete
Disallow /*action%3Dwatch
Disallow /index.php/Help
Disallow /index.php/MediaWiki
Disallow /index.php/Special%3A
Disallow /index.php/User%3A
Disallow /index.php/Talk%3A
Disallow /index.php/Template%3A
Disallow /index.php/Form%3A
Disallow /wiki/Help
Disallow /wiki/MediaWiki
Disallow /wiki/Special%3A
Disallow /wiki/User%3A
Disallow /wiki/Talk%3A
Disallow /wiki/Template%3A
Disallow /wiki/Form%3A
Disallow /currency/Help
Disallow /currency/MediaWiki
Disallow /currency/Special%3A
Disallow /currency/User%3A
Disallow /currency/Talk%3A
Disallow /currency/Template%3A
Disallow /currency/Form%3A
Disallow /index.php/Especial%3A
Disallow /index.php/Usuario%3A
Disallow /index.php/Plantilla
Disallow /index.php/Discusi%C3%83%C2%B3n%3A
Disallow /wiki/Especial%3A
Disallow /wiki/Usuario%3A
Disallow /wiki/Plantilla
Disallow /wiki/Discusi%C3%83%C2%B3n%3A
Disallow /currency/Especial%3A
Disallow /currency/Usuario%3A
Disallow /currency/Plantilla
Disallow /currency/Discusi%C3%83%C2%B3n
Disallow /index.php/Spezial%3A
Disallow /index.php/Benutzer%3A
Disallow /index.php/Vorlage
Disallow /index.php/Benutzer_Diskussion%3A
Disallow /wiki/Spezial%3A
Disallow /wiki/Benutzer%3A
Disallow /wiki/Vorlage
Disallow /wiki/Benutzer_Diskussion%3A
Disallow /currency/Spezial%3A
Disallow /currency/Benutzer%3A
Disallow /currency/Vorlage
Disallow /currency/Benutzer_Diskussion%3A
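As a rough check of how this group behaves, the standard-library parser can be pointed at the live file. Note that urllib.robotparser uses simple prefix matching and does not expand the * wildcards used in rules such as /*action%3Dedit, so those lines are treated literally; the example paths below are illustrative assumptions.

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://wikicryptocoins.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

for path in ("/wiki/Special%3AStaticMap",   # explicitly allowed above
             "/wiki/Special%3AExport",      # caught by Disallow /wiki/Special%3A
             "/wiki/Bitcoin"):              # matched by no rule, so allowed
    url = "https://wikicryptocoins.com" + path
    print(path, "->", rp.can_fetch("*", url))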

mediapartners-google*

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

hmse_robot

Rule Path
Disallow /

xovibot

Rule Path
Disallow /
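Each of the groups above denies the entire site to one named crawler. A short sketch with the same standard-library parser, using a few of the user-agent tokens listed above:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://wikicryptocoins.com/robots.txt")
rp.read()

page = "https://wikicryptocoins.com/wiki/Bitcoin"
for agent in ("*", "wget", "httrack", "sitesnagger"):
    # The named crawlers fall under their own "Disallow /" group; anything
    # else falls back to the general "*" group.
    print(agent, "->", rp.can_fetch(agent, page))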

Other Records

Field Value
sitemap https://wikicryptocoins.com/pro-sitemaps-4208498.php
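A hedged sketch for reading that sitemap record, assuming the URL returns standard sitemap XML (it may instead return a sitemap index whose <loc> entries point at further sitemap files):

import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://wikicryptocoins.com/pro-sitemaps-4208498.php"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP) as resp:
    tree = ET.parse(resp)

# Print the first few URLs (or child sitemaps) the file advertises.
for loc in tree.getroot().findall(".//sm:loc", NS)[:5]:
    print(loc.text)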

Comments

  • advertising-related bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
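The comments ask recursive downloaders to rate-limit themselves (wget's --wait option) and to honour robots.txt at all. A minimal sketch of that polite behaviour; the two-second delay, the ExampleBot token, and the article paths are illustrative assumptions:

import time
import urllib.request
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://wikicryptocoins.com/robots.txt")
rp.read()

for path in ("/wiki/Bitcoin", "/wiki/Ethereum"):
    url = "https://wikicryptocoins.com" + path
    if not rp.can_fetch("ExampleBot", url):
        print("skipping disallowed URL:", url)
        continue
    with urllib.request.urlopen(url) as resp:
        print(url, resp.status, len(resp.read()), "bytes")
    time.sleep(2)  # pause between hits instead of hammering the server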