megaredil.com
robots.txt

Robots Exclusion Standard data for megaredil.com

Resource Scan

Scan Details

Site Domain megaredil.com
Base Domain megaredil.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-04-20T06:19:04+00:00
Next Scan 2024-07-19T06:19:04+00:00

Last Successful Scan

Scanned 2023-09-23T02:58:24+00:00
URL https://megaredil.com/robots.txt
Domain IPs 104.21.37.15, 172.67.202.105, 2606:4700:3030::ac43:ca69, 2606:4700:3034::6815:250f
Response IP 104.21.37.15
Found Yes
Hash c6acf18b19091411793fd6266cdf90496f0498805e727d9effb07c63363a6284
SimHash 42dcd113e2a0

Groups

*

Rule Path
Disallow /*.pdf$
Disallow /*.docx$
Disallow /*/extranet/*
Disallow /*/dashboard/*
Disallow /*/admin/*
Disallow /*/affiliates/*
Disallow /*/admins/*
Disallow /*/users/*
Disallow /*/profile/*
Disallow /*/opinions/new
Disallow /*/productos/quick/*
Disallow /*/productos/search
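
The rules in this generic group use Googlebot-style wildcards: * matches any sequence of characters and a trailing $ anchors the end of the URL. A minimal sketch of how such a pattern can be tested against a URL path, assuming RFC 9309 / Googlebot matching semantics (the rule_matches helper is illustrative, not part of any library):

import re
from urllib.parse import urlparse

def rule_matches(pattern, url):
    # Translate the robots.txt pattern into a regex: '*' becomes '.*',
    # '$' keeps its end-anchor meaning, everything else is matched literally.
    path = urlparse(url).path or "/"
    regex = "".join(
        ".*" if ch == "*" else "$" if ch == "$" else re.escape(ch)
        for ch in pattern
    )
    # re.match anchors at the start of the path, mirroring robots.txt matching.
    return re.match(regex, path) is not None

print(rule_matches("/*.pdf$", "https://megaredil.com/ficha.pdf"))         # True
print(rule_matches("/*.pdf$", "https://megaredil.com/productos"))         # False
print(rule_matches("/*/admin/*", "https://megaredil.com/es/admin/login")) # True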

googlebot-image

Rule Path
Allow /

googlebot

Rule Path
Allow /
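
Under RFC 9309 a crawler obeys only the most specific group whose user-agent token matches it, so Googlebot-Image and Googlebot follow their own Allow / groups and ignore the Disallow rules of the * group above (the file's own comments, listed at the end of this report, say the same). A quick check with Python's standard-library parser, using a reconstructed and simplified snippet (urllib.robotparser does plain prefix matching only, so a non-wildcard path stands in for the real rules):

from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /extranet/

User-agent: Googlebot-Image
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot-Image matches its own group, so the generic Disallow does not apply to it.
print(rp.can_fetch("Googlebot-Image", "https://megaredil.com/extranet/docs"))  # True
print(rp.can_fetch("SomeOtherBot", "https://megaredil.com/extranet/docs"))     # False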

yandexbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20
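
The non-standard Crawl-delay: 20 record asks YandexBot to wait at least 20 seconds between requests. Python's standard-library parser exposes the value through crawl_delay(); a minimal sketch against a reconstructed two-line snippet (an assumption based on this report, not the fetched file):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: YandexBot",
    "Crawl-delay: 20",
])

# 20 for YandexBot; None for agents with no matching group or delay value.
print(rp.crawl_delay("YandexBot"))
# A polite fetcher would pause that long between successive requests (e.g. time.sleep).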

pinterest

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 1

ahrefsbot

Rule Path
Disallow /

seokicks-robot

Rule Path
Disallow /

sistrix crawler

Rule Path
Disallow /

uptimerobot/2.0

Rule Path
Disallow /

ezooms robot

Rule Path
Disallow /

perl lwp

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

netestate ne crawler (+http://www.website-datenbank.de/)

Rule Path
Disallow /

wiseguys robot

Rule Path
Disallow /

turnitin robot

Rule Path
Disallow /

turnitinbot

Rule Path
Disallow /

turnitin bot

Rule Path
Disallow /

turnitinbot/3.0 (http://www.turnitin.com/robot/crawlerinfo.html)

Rule Path
Disallow /

turnitinbot/3.0

Rule Path
Disallow /

heritrix

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

eccp/1.0 (search@eniro.com)

Rule Path
Disallow /

baiduspider
baiduspider-video
baiduspider-image
mozilla/5.0 (compatible; baiduspider/2.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/3.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/4.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/5.0; +http://www.baidu.com/search/spider.html)
baiduspider/2.0
baiduspider/3.0
baiduspider/4.0
baiduspider/5.0

Rule Path
Disallow /

sogou spider

Rule Path
Disallow /

youdaobot

Rule Path
Disallow /

gsa-crawler (enterprise; t4-knhh62cdkc2w3; gsa_manage@nikon-sys.co.jp)

Rule Path
Disallow /

megaindex.ru/2.0

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

mail.ru_bot/2.0

Rule Path
Disallow /

mail.ru

Rule Path
Disallow /

mail.ru_bot/2.0; +http://go.mail.ru/help/robots

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mj12bot/v1.4.3

Rule Path
Disallow /

Other Records

Field Value
crawl-delay 30

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

twiceler

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

nutch

Rule Path
Disallow /

spock

Rule Path
Disallow /

omniexplorer_bot

Rule Path
Disallow /

becomebot

Rule Path
Disallow /

geniebot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

mlbot

Rule Path
Disallow /

linguee bot

Rule Path
Disallow /

aihitbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

sbider/nutch

Rule Path
Disallow /

jyxobot

Rule Path
Disallow /

magent

Rule Path
Disallow /

speedy spider

Rule Path
Disallow /

shopwiki

Rule Path
Disallow /

huasai

Rule Path
Disallow /

datacha0s

Rule Path
Disallow /

atomic_email_hunter

Rule Path
Disallow /

mp3bot

Rule Path
Disallow /

winhttp

Rule Path
Disallow /

aspiegelbot

Rule Path
Disallow /

petalbot

Rule Path
Disallow /

betabot

Rule Path
Disallow /

core-project

Rule Path
Disallow /

panscient.com

Rule Path
Disallow /

java

Rule Path
Disallow /

libwww-perl

Rule Path
Disallow /

Other Records

Field Value
sitemap https://megaredil.com/sitemap.xml?locale=es
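
The Sitemap record applies to every crawler regardless of which group it matches. Python 3.8+ exposes such lines through RobotFileParser.site_maps(); a one-line reconstruction based on this report:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse(["Sitemap: https://megaredil.com/sitemap.xml?locale=es"])

print(rp.site_maps())  # ['https://megaredil.com/sitemap.xml?locale=es']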

Comments

  • Google Image Crawler Setup - having crawler-specific sections makes it ignore the generic * group
  • Yandex tends to be rather aggressive; may be worth keeping them at arm's length
  • Block Ahrefs
  • Block SEOkicks
  • Block SISTRIX
  • Block Uptime robot
  • Block Ezooms Robot
  • Block Perl LWP
  • Block BlexBot
  • Block netEstate NE Crawler (+http://www.website-datenbank.de/)
  • Block WiseGuys Robot
  • Block Turnitin Robot
  • Block Heritrix
  • Block pricepi
  • Block Searchmetrics Bot
  • User-agent: SearchmetricsBot
  • Disallow: /
  • Block Eniro
  • Block Baidu
  • Block SoGou
  • Block Youdao
  • Block Nikon JP Crawler
  • Block MegaIndex.ru
  • Sitemap files

Warnings

  • 4 invalid lines.