sedici.unlp.edu.ar
robots.txt

Robots Exclusion Standard data for sedici.unlp.edu.ar

Resource Scan

Scan Details

Site Domain sedici.unlp.edu.ar
Base Domain unlp.edu.ar
Scan Status Ok
Last Scan 2024-11-03T09:20:22+00:00
Next Scan 2024-12-03T09:20:22+00:00

Last Scan

Scanned 2024-11-03T09:20:22+00:00
URL https://sedici.unlp.edu.ar/robots.txt
Domain IPs 163.10.34.147
Response IP 163.10.34.147
Found Yes
Hash b03f45a92a3ad93844253b7efda4eeed6baae68b1a325d68c81e3f794f16d079
SimHash 87185dd9e4f7
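
The recorded Hash is a 64-digit hex string, which suggests (though the report does not say) a SHA-256 digest of the fetched body. A minimal sketch of re-checking it, assuming SHA-256; a mismatch would only mean the file has changed since the scan date:

    import hashlib
    import urllib.request

    # Assumption: the "Hash" field above is a SHA-256 digest of the
    # robots.txt body; the report does not name the algorithm.
    EXPECTED = "b03f45a92a3ad93844253b7efda4eeed6baae68b1a325d68c81e3f794f16d079"

    with urllib.request.urlopen("https://sedici.unlp.edu.ar/robots.txt") as resp:
        body = resp.read()

    print(hashlib.sha256(body).hexdigest() == EXPECTED)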

Groups

*

Rule Path
Allow /
Disallow /themes
Disallow /browse
Disallow /*/browse
Disallow /community-list
Disallow /discover
Disallow /*/discover
Disallow /search-filter
Disallow /*/search-filter

Other Records

Field Value
crawl-delay 8
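
Taken together, the wildcard group above allows the site root but excludes theming assets and the browse/discover/search-filter listing pages, and asks compliant crawlers to wait 8 seconds between requests. A minimal sketch of consuming these rules with Python's standard-library parser; note that urllib.robotparser matches paths by simple prefix, so the mid-path wildcard in rules like /*/browse is not expanded the way wildcard-aware crawlers expand it:

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://sedici.unlp.edu.ar/robots.txt")
    rp.read()

    print(rp.can_fetch("*", "https://sedici.unlp.edu.ar/"))        # True  (Allow /)
    print(rp.can_fetch("*", "https://sedici.unlp.edu.ar/browse"))  # False (Disallow /browse)
    print(rp.crawl_delay("*"))                                     # 8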

blexbot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

mediapartners-google*

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

Other Records

Field Value
sitemap https://sedici.unlp.edu.ar/sitemap
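
The sitemap record points crawlers at the repository's URL inventory. A hedged sketch of enumerating it; whether /sitemap returns a <urlset> or a <sitemapindex> is an assumption (DSpace installations typically serve an index there), so the sketch collects <loc> entries from either form:

    import urllib.request
    import xml.etree.ElementTree as ET

    LOC = "{http://www.sitemaps.org/schemas/sitemap/0.9}loc"

    with urllib.request.urlopen("https://sedici.unlp.edu.ar/sitemap") as resp:
        root = ET.fromstring(resp.read())

    # Collect every <loc>, whether the root is a <sitemapindex> or a <urlset>.
    for loc in root.iter(LOC):
        print(loc.text)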

Comments

  • We don't want crawling by these bots 3296
  • Section for misbehaving bots
  • The following directives to block specific robots were borrowed from Wikipedia's robots.txt
  • advertising-related bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • If your DSpace is going down because of someone using recursive wget, you can activate the following rule.
  • If your own faculty is bringing down your DSpace with recursive wget, you can advise them to use the --wait option to set the delay between hits.
  • User-agent: wget
  • Disallow: /
  • The 'grub' distributed client has been *very* poorly behaved. Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
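
The comments above boil down to one request: keep the request rate low. A minimal sketch of the polite behaviour being asked for, re-using the advertised crawl-delay of 8 seconds between hits (the worklist is hypothetical):

    import time
    import urllib.request
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://sedici.unlp.edu.ar/robots.txt")
    rp.read()
    delay = rp.crawl_delay("*") or 8  # fall back to the value recorded above

    for url in ["https://sedici.unlp.edu.ar/"]:  # hypothetical worklist
        if rp.can_fetch("*", url):
            with urllib.request.urlopen(url) as resp:
                resp.read()
        time.sleep(delay)

Command-line mirroring tools can be throttled the same way; wget, for instance, accepts --wait=8 to space out recursive retrievals, which is exactly the advice quoted in the comments.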