eluniversalhidalgo.com.mx
robots.txt

Robots Exclusion Standard data for eluniversalhidalgo.com.mx

Resource Scan

Scan Details

Site Domain eluniversalhidalgo.com.mx
Base Domain eluniversalhidalgo.com.mx
Scan Status Ok
Last Scan 2025-03-17T10:17:05+00:00
Next Scan 2025-03-24T10:17:05+00:00

Last Scan

Scanned 2025-03-17T10:17:05+00:00
URL https://eluniversalhidalgo.com.mx/robots.txt
Redirect https://www.eluniversalhidalgo.com.mx:443/robots.txt
Redirect Domain www.eluniversalhidalgo.com.mx
Redirect Base eluniversalhidalgo.com.mx
Domain IPs 15.197.185.232, 3.33.172.2
Redirect IPs 23.209.46.84, 23.209.46.91, 2600:1413:b000:13::b857:c184, 2600:1413:b000:13::b857:c185
Response IP 23.209.46.84
Found Yes
Hash 59b0fe456858cdc0fa736bf9429f99adddf51821982143257b70d5d4f30670bc
SimHash b8961909c565

Groups

*
googlebot
googlebot-news
googlebot-image
googlebot-video
googlebot-mobile
twitterbot

Rule Path
Disallow /files/

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

Other Records

Field Value
sitemap https://www.eluniversalhidalgo.com.mx/arc/outboundfeeds/news/?outputType=xml
sitemap https://www.eluniversalhidalgo.com.mx/arc/outboundfeeds/general/?outputType=xml

Comments

  • robots.txt 2023-03-21
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • most of the time it causes problems
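The rules recorded above can be checked programmatically with Python's standard-library `urllib.robotparser`. The sketch below reconstructs a small illustrative subset of the groups from this scan (the catch-all group that disallows `/files/`, and the `wget` group that disallows everything) rather than the full file, and the user-agent string `SomeBrowser` is an arbitrary placeholder:

```python
from urllib.robotparser import RobotFileParser

# Illustrative subset of the groups listed in the scan above,
# not the complete robots.txt for this site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /files/

User-agent: wget
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The catch-all group blocks only paths under /files/ ...
print(parser.can_fetch("SomeBrowser", "https://www.eluniversalhidalgo.com.mx/files/x.pdf"))  # False
print(parser.can_fetch("SomeBrowser", "https://www.eluniversalhidalgo.com.mx/"))             # True

# ... while the wget group blocks the whole site.
print(parser.can_fetch("wget", "https://www.eluniversalhidalgo.com.mx/"))                    # False
```

Note that, as the comments above state, a crawler only honors these rules when the file is served from the host's root path.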