repo-mhs.ulm.ac.id
robots.txt

Robots Exclusion Standard data for repo-mhs.ulm.ac.id

Resource Scan

Scan Details

Site Domain repo-mhs.ulm.ac.id
Base Domain ulm.ac.id
Scan Status Ok
Last Scan 2025-05-24T23:57:57+00:00
Next Scan 2025-06-23T23:57:57+00:00

Last Scan

Scanned 2025-05-24T23:57:57+00:00
URL https://repo-mhs.ulm.ac.id/robots.txt
Domain IPs 104.26.8.129, 104.26.9.129, 172.67.69.88, 2606:4700:20::681a:881, 2606:4700:20::681a:981, 2606:4700:20::ac43:4558
Response IP 104.26.9.129
Found Yes
Hash 7c8d5a37724fc62da251dcac52619de1f9e012b902860af4194291bdfd80c9d3
SimHash a514571565f5
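
The 64-hex-character Hash is consistent with a SHA-256 digest of the response body, though the scanner does not name the algorithm. A minimal Python sketch to reproduce it under that assumption (the digest will differ if the file has changed since the scan date):

    import hashlib
    from urllib.request import urlopen

    # Fetch the current robots.txt and hash the raw response bytes.
    body = urlopen("https://repo-mhs.ulm.ac.id/robots.txt").read()
    print(hashlib.sha256(body).hexdigest())
    # The 2025-05-24 scan reported:
    # 7c8d5a37724fc62da251dcac52619de1f9e012b902860af4194291bdfd80c9d3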

Groups

*

Rule Path
Disallow /discover
Disallow /search-filter
Disallow /search
Disallow /index.html
Disallow /scholar
Disallow /citations?
Allow /citations?user=
Disallow /citations?*cstart=
Disallow /citations?user=*%40
Allow /citations?view_op=list_classic_articles
Allow /citations?view_op=metrics_intro
Allow /citations?view_op=new_profile
Allow /citations?view_op=sitemap
Allow /citations?view_op=top_venues
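
The group above combines DSpace's default Discovery exclusions with Google-Scholar-style /citations rules. A sketch checking paths against it with Python's urllib.robotparser; note that robotparser applies rules in first-match order (not Google's longest-match rule) and treats "*" inside a path literally, so only the plain prefix rules are exercised here:

    from urllib import robotparser

    # Partial reconstruction of the "*" group in raw robots.txt syntax.
    rules = """\
    User-agent: *
    Disallow: /discover
    Disallow: /search-filter
    Disallow: /search
    Disallow: /index.html
    Disallow: /scholar
    """

    rp = robotparser.RobotFileParser()
    rp.parse(rules.splitlines())

    print(rp.can_fetch("AnyBot", "/discover"))    # False: prefix rule matches
    print(rp.can_fetch("AnyBot", "/handle/123"))  # True: no rule matches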

mediapartners-google*

Rule Path
Allow /

google*

Rule Path
Allow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /
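
Each of the per-agent groups above maps to a two-line robots.txt block (a User-agent name followed by Disallow: /). A sketch of how a compliant client resolves them, assuming Python's urllib.robotparser, which lowercases both sides and substring-matches the group name against the product token before the "/" in the request's user-agent string:

    from urllib import robotparser

    # Two representative groups from the file, plus the "*" default.
    rules = """\
    User-agent: httrack
    Disallow: /

    User-agent: webreaper
    Disallow: /

    User-agent: *
    Disallow: /discover
    """

    rp = robotparser.RobotFileParser()
    rp.parse(rules.splitlines())

    print(rp.can_fetch("HTTrack/3.49", "/handle/123"))  # False: "httrack" group
    print(rp.can_fetch("Mozilla/5.0", "/handle/123"))   # True: falls to "*"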

Other Records

Field Value
sitemap http://localhost:8080/xmlui/sitemap
sitemap http://localhost:8080/xmlui/htmlmap
sitemap https://repo-mhs.ulm.ac.id/sitemap
sitemap https://repo-mhs.ulm.ac.id/htmlmap
sitemap http://repo-mhs.ulm.ac.id:8080/sitemap
sitemap http://repo-mhs.ulm.ac.id:8080/htmlmap
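
Four of the six Sitemap records point at http://localhost:8080 or port 8080; per the comments below, DSpace substitutes this prefix from dspace.cfg, so the localhost entries suggest a default value was left in place. A sketch retrieving the records as a client would, assuming Python 3.8+ for site_maps():

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://repo-mhs.ulm.ac.id/robots.txt")
    rp.read()  # fetch and parse the live file

    # Returns every Sitemap field verbatim (including the unreachable
    # localhost entries), or None if the file declares none.
    print(rp.site_maps())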

Comments

  • The FULL URL to the DSpace sitemaps
  • The http://localhost:8080/xmlui will be auto-filled with the value in dspace.cfg
  • XML sitemap is listed first as it is preferred by most search engines
  • Default Access Group
  • (NOTE: blank lines are not allowable in a group record)
  • Disable access to Discovery search and filters
  • Optionally uncomment the following line ONLY if sitemaps are working
  • and you have verified that your site is being indexed correctly.
  • Disallow: /browse
  • Disallow: /handle/123456789/*/browse
  • If you have configured DSpace (Solr-based) Statistics to be publicly
  • accessible, then you may not want this content to be indexed
  • Disallow: /statistics
  • You also may wish to disallow access to the following paths, in order
  • to stop web spiders from accessing user-based content
  • Disallow: /contact
  • Disallow: /feedback
  • Disallow: /forgot
  • Disallow: /login
  • Disallow: /register
  • Section for misbehaving bots
  • The following directives to block specific robots were borrowed from Wikipedia's robots.txt
  • advertising-related bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • If your DSpace is going down because of someone using recursive wget,
  • you can activate the following rule.
  • If your own faculty is bringing down your dspace with recursive wget,
  • you can advise them to use the --wait option to set the delay between hits.
  • User-agent: wget
  • Disallow: /
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/