www2.unil.ch
robots.txt

Robots Exclusion Standard data for www2.unil.ch

Resource Scan

Scan Details

Site Domain www2.unil.ch
Base Domain unil.ch
Scan Status Ok
Last Scan 2025-10-20T21:30:48+00:00
Next Scan 2025-11-19T21:30:48+00:00

Last Scan

Scanned 2025-10-20T21:30:48+00:00
URL https://www2.unil.ch/robots.txt
Domain IPs 192.42.183.207
Response IP 192.42.183.207
Found Yes
Hash 06d6c15e726aff7f8b32cd9bcaeef98f36b4aec4756e095d25d92fd3b759af8b
SimHash f6db5a00c3c6

Groups

*

Rule Path
Disallow /formatox/
Disallow /toto/
Disallow /softunil/
Disallow /perunil/pu2/
Disallow /perunil/oai/
Disallow /perunil/biomed/index.php/site/
Disallow /elitessuisses/ajax.php
Disallow /elitessuisses//ajax.php
Disallow /elitessuisses/personne.php

gptbot
claudebot
claude-web
ccbot
google-extended
applebot-extended
facebookbot
meta-externalagent
meta-externalfetcher
diffbot
perplexitybot
omgili
omgilibot
webzio-extended
imagesiftbot
bytespider
amazonbot
youbot
semrushbot-ocob
petalbot
velenpublicwebcrawler
turnitinbot
timpibot
oai-searchbot
icc-crawler
ai2bot
ai2bot-dolma
dataforseobot
awariobot
awariosmartbot
awariorssbot
google-cloudvertexbot
pangubot
kangaroo bot
sentibot
img2dataset
meltwater
seekr
peer39_crawler
cohere-ai
cohere-training-data-crawler
duckassistbot
scrapy

Rule Path
Disallow /

*

No rules defined. All paths allowed.
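
The effect of the groups above can be checked locally with Python's standard urllib.robotparser. The sketch below reconstructs only a small subset of the scanned rules (three of the path-level Disallow lines and three of the listed AI user agents) and is an approximation for illustration, not the full file; the test URLs are arbitrary examples.

```python
from urllib.robotparser import RobotFileParser

# Approximate reconstruction of a subset of the scanned rules:
# a wildcard group with path-level Disallow lines, plus a group
# that blocks the listed AI user agents from the whole site.
robots_txt = """\
User-agent: *
Disallow: /formatox/
Disallow: /toto/
Disallow: /perunil/pu2/

User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Wildcard crawlers are blocked only from the listed paths.
print(rp.can_fetch("*", "https://www2.unil.ch/formatox/index.html"))   # False
print(rp.can_fetch("*", "https://www2.unil.ch/any/other/page.html"))   # True

# The listed AI crawlers are blocked everywhere.
print(rp.can_fetch("GPTBot", "https://www2.unil.ch/any/other/page.html"))  # False
```

Because the AI user agents are grouped under a single Disallow /, they are blocked site-wide, while generic crawlers are only excluded from the specific paths in the first group.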

Comments

  • Block all known AI crawlers and assistants from using content for training AI models.
  • Source: https://robotstxt.com/ai
  • Block any non-specified AI crawlers (e.g., new or unknown bots) from using content for training AI models. This directive is still experimental and may not be supported by all AI crawlers.

Warnings

  • `disallowaitraining` is not a known field.
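
Parsers that follow the Robots Exclusion Standard skip fields they do not recognize, so this warning is informational rather than a blocking error. A minimal sketch with Python's urllib.robotparser, using a placeholder value for the unknown field since the scan does not record its original text:

```python
from urllib.robotparser import RobotFileParser

# The value after the unknown field is a placeholder; the scan does not
# show the original line. Unrecognized fields are skipped by the parser.
rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallowaitraining: /placeholder-value
Disallow: /formatox/
""".splitlines())

# Only the recognized Disallow rule takes effect.
print(rp.can_fetch("*", "https://www2.unil.ch/formatox/"))  # False
print(rp.can_fetch("*", "https://www2.unil.ch/"))           # True
```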