www.research-collection.ethz.ch
robots.txt

Robots Exclusion Standard data for www.research-collection.ethz.ch

Resource Scan

Scan Details

Site Domain www.research-collection.ethz.ch
Base Domain ethz.ch
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-05-10T03:38:10+00:00
Next Scan 2025-07-09T03:38:10+00:00

Last Successful Scan

Scanned 2025-02-17T01:08:22+00:00
URL https://www.research-collection.ethz.ch/robots.txt
Domain IPs 129.132.8.151
Response IP 129.132.8.151
Found Yes
Hash 3f857615f8dcd307baaf075c3bdce68b72185359278eebc61e1705e1feb667a5
SimHash b09ccf53c5b4

Groups

*

Rule Path
Disallow /discover
Disallow /search-filter
Disallow /ds2/stream

Other Records

Field Value
crawl-delay 10
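
Read together, the records above correspond to a default group like the following in the robots.txt file itself (a reconstruction from the scan data; directive capitalization is conventional, and the scan does not preserve the original line order within the group):

  User-agent: *
  Crawl-delay: 10
  Disallow: /discover
  Disallow: /search-filter
  Disallow: /ds2/stream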

claudebot

Rule Path
Disallow /

gptbot

Rule Path
Disallow /

meta-externalagent

Rule Path
Disallow /

googleother

Rule Path
Disallow /

perplexitybot

Rule Path
Disallow /

bytespider

Rule Path
Disallow /

mediapartners-google*

Rule Path
Disallow /

trendictionbot

Rule Path
Disallow /

pdf drive crawler

Rule Path
Disallow /

skyworkspider

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

semrushbot-sa

Rule Path
Disallow /

turnitinbot

Rule Path
Disallow /

piplbot

Rule Path
Disallow /
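
Each of the single-agent groups above corresponds to a two-line record of this form in the file itself (a reconstruction from the scan data, using claudebot as the example):

  User-agent: claudebot
  Disallow: /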

Other Records

Field Value
sitemap https://www.research-collection.ethz.ch/sitemap
sitemap https://www.research-collection.ethz.ch/htmlmap
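
In the file itself these records appear as top-level Sitemap directives (a reconstruction from the scan data; directive capitalization is conventional):

  Sitemap: https://www.research-collection.ethz.ch/sitemap
  Sitemap: https://www.research-collection.ethz.ch/htmlmap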

Comments

  • The FULL URL to the DSpace sitemaps
  • The https://www.research-collection.ethz.ch will be auto-filled with the value in dspace.cfg
  • XML sitemap is listed first as it is preferred by most search engines
  • Default Access Group
  • (NOTE: blank lines are not allowable in a group record)
  • Disable access to Discovery search and filters
  • Optionally uncomment the following line ONLY if sitemaps are working
  • and you have verified that your site is being indexed correctly.
  • Disallow: /browse
  • Disallow: /handle/20.500.11850/*/browse
  • If you have configured DSpace (Solr-based) Statistics to be publicly
  • accessible, then you may not want this content to be indexed
  • Disallow: /statistics
  • You also may wish to disallow access to the following paths, in order
  • to stop web spiders from accessing user-based content
  • Disallow: /contact
  • Disallow: /feedback
  • Disallow: /forgot
  • Disallow: /login
  • Disallow: /register
  • Section for AI-Crawlers
  • The following directives block bots that are used to build AI models and put too much strain on the RC
  • AI-Crawler by Facebook
  • This Google crawler isn't used for search but for internal use (probably including AI)
  • Section for misbehaving bots
  • The following directives to block specific robots were borrowed from Wikipedia's robots.txt
  • advertising-related bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • If your DSpace is going down because of someone using recursive wget,
  • you can activate the following rule.
  • If your own faculty is bringing down your dspace with recursive wget,
  • you can advise them to use the --wait option to set the delay between hits.
  • User-agent: wget
  • Disallow: /
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • A capture bot, hits with too high frequency, not acceptable
  • https://www.semrush.com/bot/
  • A capture bot, hits with too high frequency, not acceptable
  • A commercial identity website, not needed