www-xataka-com.nproxy.org
robots.txt

Robots Exclusion Standard data for www-xataka-com.nproxy.org

Resource Scan

Scan Details

Site Domain www-xataka-com.nproxy.org
Base Domain nproxy.org
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a server error.
Last Scan 2024-11-17T00:30:28+00:00
Next Scan 2024-11-24T00:30:28+00:00

Last Successful Scan

Scanned 2024-10-17T00:10:15+00:00
URL http://www-xataka-com.nproxy.org/robots.txt
Domain IPs 146.59.252.180
Response IP 146.59.252.180
Found Yes
Hash 92f65ace2be40fed164430e62bcde19be08c1019152349575e96dd19f3c725c2
SimHash e2126549ecd7

Groups

orthogaffe

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

gsa-crawler

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

cncdialer

Rule Path
Disallow /

maxthon

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

slurp

Rule Path
Disallow /

*

Rule Path
Disallow /wp-content/
Disallow /wp-admin/
Disallow /wp-includes/
Disallow /wpi/
Disallow /trackback/
Disallow /*/*/*/feed.xml
Allow /retro/*
Disallow /retro
Disallow /login.php/
Disallow /frontend.php/
Disallow /api/1.0/migration
Disallow /server
Disallow /queue
Disallow /mobile.php/
Disallow /app.php/
Disallow /main.php/
Disallow /approve
Disallow /duplicate
Disallow /1018282
Disallow /api/
Disallow /c/
Disallow /preview-main/*
Disallow /morepostcomments
Disallow /offtopic
Disallow /p/
Disallow /pda
Disallow /tracker
Disallow /clubcampusparty
Disallow /entraenatrix
Disallow /espaciohpultrabook
Disallow /espaciohtcone
Disallow /espaciohuawei
Disallow /espaciolgseriex
Disallow /espaciolumia
Disallow /espacionokia
Disallow /espaciotecnologiasford
Disallow /espaciotoshiba
Disallow /lgmobile
Disallow /espaciovisa
Disallow /movistartv
Disallow /mundogalaxy
Disallow /nuevoestilodeti
Disallow /philipssmarttv
Disallow /tecnologiakia
Disallow /vivephilipstv
Disallow /vodafoneadslafondo
Disallow /wishlistpremiosxataka
Disallow /.well-known/amphtml/apikey.pub
Disallow /expertos/respuestas/*
Disallow /usuario/*
Disallow /busqueda?
Disallow /search?q=
Disallow /frontend_dev.php/
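The groups above can be checked programmatically. A minimal sketch using Python's standard-library `urllib.robotparser`, fed a small hand-written sample of the rules listed here (the named bot `wget` is an example from the per-bot groups above; `SomeBot` is a hypothetical generic crawler). Note that this parser does plain prefix matching and does not implement `*` wildcards inside paths, so wildcard rules such as `Disallow: /*/*/*/feed.xml` are left out of the sample.

```python
from urllib.robotparser import RobotFileParser

# A sample reconstructed from the groups above: wget is blocked from
# everything; generic crawlers are blocked only from listed prefixes.
SAMPLE = """\
User-agent: wget
Disallow: /

User-agent: *
Disallow: /wp-admin/
Disallow: /api/
"""

rp = RobotFileParser()
rp.parse(SAMPLE.splitlines())

print(rp.can_fetch("wget", "/retro"))                   # False: own group blocks /
print(rp.can_fetch("SomeBot", "/wp-admin/update.php"))  # False: prefix match
print(rp.can_fetch("SomeBot", "/retro"))                # True: no rule matches
```

User-agent matching is a case-insensitive substring test, so a crawler identifying itself as `Wget/1.21` would still fall under the `wget` group.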

Other Records

Field Value
sitemap https://www-xataka-com.nproxy.org/sitemap_news.xml
sitemap https://www-xataka-com.nproxy.org/club/sitemap.xml
sitemap https://www-xataka-com.nproxy.org/sitemap_index.xml
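Sitemap records like the ones above can also be read back with the standard-library parser: `RobotFileParser.site_maps()` (available in Python 3.8+) returns the `Sitemap:` URLs in file order, or `None` if there are none. A short sketch using two of the URLs listed above:

```python
from urllib.robotparser import RobotFileParser

SAMPLE = """\
User-agent: *
Disallow: /wp-admin/

Sitemap: https://www-xataka-com.nproxy.org/sitemap_news.xml
Sitemap: https://www-xataka-com.nproxy.org/sitemap_index.xml
"""

rp = RobotFileParser()
rp.parse(SAMPLE.splitlines())

# Sitemap lines are collected independently of any user-agent group.
print(rp.site_maps())
```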

Comments

  • robots.txt
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem.
  • Please read the man page and use it properly; there is a
  • --wait option you can use to set the delay between hits,
  • for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Disallow: /redirect
  • Disallow: /la-cacharreria/search*?*