alimentoslaxantes.com
robots.txt

Robots Exclusion Standard data for alimentoslaxantes.com

Resource Scan

Scan Details

Site Domain alimentoslaxantes.com
Base Domain alimentoslaxantes.com
Scan Status Ok
Last Scan 2024-09-18T03:51:53+00:00
Next Scan 2024-09-25T03:51:53+00:00

Last Scan

Scanned 2024-09-18T03:51:53+00:00
URL https://alimentoslaxantes.com/robots.txt
Domain IPs 85.10.192.89
Response IP 85.10.192.89
Found Yes
Hash 87d0c5ab8911de5c09dc9b515fa1f9698b6022131c399485cff9f47be519019f
SimHash a99b58b3ccd5

Groups

googlebot

Rule Path
Allow /*.css$
Allow /*.js$
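
Expressed in robots.txt syntax, this group plausibly reads as follows (a reconstruction from the scan data, not the verbatim file). The `$` anchors restrict each pattern to URLs ending in .css or .js, which keeps stylesheets and scripts crawlable:

```
User-agent: googlebot
Allow: /*.css$
Allow: /*.js$
```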

noxtrumbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20

slurp

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20
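
A Crawl-delay of 20 asks compliant crawlers to wait 20 seconds between successive requests. A minimal sketch of honoring it in Python (the `fetch` callable and URL list are hypothetical placeholders, not part of the scan):

```python
import time

CRAWL_DELAY = 20  # seconds, as declared for noxtrumbot, msnbot and slurp above

def polite_fetch(urls, fetch):
    """Call fetch(url) for each URL, sleeping CRAWL_DELAY seconds between requests."""
    results = []
    for i, url in enumerate(urls):
        if i:  # no delay before the very first request
            time.sleep(CRAWL_DELAY)
        results.append(fetch(url))
    return results
```

With one URL no sleep occurs; with n URLs the loop sleeps n − 1 times, so total wall time grows by roughly 20 × (n − 1) seconds.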

msiecrawler

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

libwww

Rule Path
Disallow /

orthogaffe

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

Other Records

Field Value
sitemap https://alimentoslaxantes.com/sitemap.xml
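
The directives summarized above can be checked locally with Python's standard `urllib.robotparser`. The robots.txt content below is a minimal reconstruction from the scan data (an assumption; the live file at https://alimentoslaxantes.com/robots.txt may differ):

```python
from urllib.robotparser import RobotFileParser

# Minimal reconstruction of the scanned robots.txt (assumption: one
# representative group per behavior; the real file lists many more agents).
robots_txt = """\
User-agent: googlebot
Allow: /*.css$
Allow: /*.js$

User-agent: msnbot
Crawl-delay: 20

User-agent: webreaper
Disallow: /

Sitemap: https://alimentoslaxantes.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A blanket-blocked agent is denied everywhere.
print(parser.can_fetch("webreaper", "https://alimentoslaxantes.com/"))  # False
# msnbot has no Disallow rules, so all paths are allowed...
print(parser.can_fetch("msnbot", "https://alimentoslaxantes.com/"))     # True
# ...but it is asked to wait 20 seconds between requests.
print(parser.crawl_delay("msnbot"))                                     # 20
print(parser.site_maps())  # ['https://alimentoslaxantes.com/sitemap.xml']
```

Note that `parse()` accepts the file's lines directly, so no network request is needed to verify the rules.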

Comments

  • User-agent: *
  • Sitemap allowed, searches are not.
  • Disallow: /?s=
  • Disallow: /search
  • Prevents blocked-resource problems in Google Webmaster Tools
  • We slow down some bots that tend to go wild
  • Blocking bots and crawlers of little use