repository.ugm.ac.id
robots.txt

Robots Exclusion Standard data for repository.ugm.ac.id

Resource Scan

Scan Details

Site Domain repository.ugm.ac.id
Base Domain ugm.ac.id
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-06-04T05:11:24+00:00
Next Scan 2024-09-02T05:11:24+00:00

Last Successful Scan

Scanned 2023-07-18T03:37:54+00:00
URL https://repository.ugm.ac.id/robots.txt
Domain IPs 175.111.88.112
Response IP 175.111.88.112
Found Yes
Hash 5ac4af56b35ca81b1bbe1ccafaf1d84f0f79ad12dbda52ff95871005969e4e23
SimHash 28109d014575
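
The Hash recorded above is a 64-character hex digest, which suggests SHA-256 of the fetched body. A minimal sketch for checking whether the live file still matches the last successful scan, assuming the digest is indeed SHA-256 over the raw response bytes:

    import hashlib
    import urllib.request

    URL = "https://repository.ugm.ac.id/robots.txt"
    # Digest recorded by the last successful scan (assumed to be SHA-256).
    EXPECTED = "5ac4af56b35ca81b1bbe1ccafaf1d84f0f79ad12dbda52ff95871005969e4e23"

    # The most recent scan failed with a client error, so this request may
    # raise urllib.error.HTTPError rather than return a body.
    with urllib.request.urlopen(URL, timeout=30) as resp:
        digest = hashlib.sha256(resp.read()).hexdigest()

    print("unchanged" if digest == EXPECTED else "changed since last successful scan")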

Groups

*

Rule      Path
Disallow  /

Other Records

Field        Value
crawl-delay  600

googlebot
googlebot-image
mediapartners-google
msnbot
msnbot-media
slurp
yahoo-blogs
yahoo-mmcrawler

No rules defined. All paths allowed.

Other Records

Field    Value
sitemap  https://repository.ugm.ac.id/sitemap.xml
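
Read together, these groups disallow everything for the wildcard agent (with a 600-second crawl delay) while allowing the named bots everywhere. A minimal sketch of how a crawler would evaluate these rules with Python's urllib.robotparser (3.6+ for crawl_delay); the file text below is a reconstruction inferred from this scan, so the exact line ordering and the empty Disallow line are assumptions:

    from urllib import robotparser

    # Reconstruction of the scanned robots.txt, inferred from the groups above.
    ROBOTS_TXT = """\
    User-agent: *
    Disallow: /
    Crawl-delay: 600

    User-agent: googlebot
    User-agent: googlebot-image
    User-agent: mediapartners-google
    User-agent: msnbot
    User-agent: msnbot-media
    User-agent: slurp
    User-agent: yahoo-blogs
    User-agent: yahoo-mmcrawler
    Disallow:
    """

    rp = robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())

    url = "https://repository.ugm.ac.id/some/record"
    print(rp.can_fetch("*", url))          # False: the wildcard group disallows all paths
    print(rp.can_fetch("googlebot", url))  # True: the named group allows everything
    print(rp.crawl_delay("*"))             # 600 (seconds between requests)

Note that an entirely empty group confuses some parsers, which is why the reconstruction uses an explicit empty Disallow line; urllib.robotparser treats it as allow-all, matching the "No rules defined. All paths allowed." reading above.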

Comments

  • $Id: robots.txt,v 1.9.2.1 2008/12/10 20:12:19 goba Exp $
  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • disallow all
  • but allow only important bots