universitaetssport.unibas.ch
robots.txt

Robots Exclusion Standard data for universitaetssport.unibas.ch

Resource Scan

Scan Details

Site Domain universitaetssport.unibas.ch
Base Domain unibas.ch
Scan Status Ok
Last Scan 2024-11-03T12:21:55+00:00
Next Scan 2024-12-03T12:21:55+00:00

Last Scan

Scanned 2024-11-03T12:21:55+00:00
URL https://universitaetssport.unibas.ch/robots.txt
Domain IPs 131.152.254.220
Response IP 131.152.254.220
Found Yes
Hash 3e07b8e33296c3151e965c0c0a7b53edbb0dcda444870b209466877916f6d1c6
SimHash 7a54d3916031

Groups

mediapartners-google

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10
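
A group that defines no Allow/Disallow rules but does set a crawl-delay reduces to a two-line entry in the source file. A minimal sketch reconstructed from the scan data above (casing is assumed, not taken from the verbatim file):

    User-agent: Mediapartners-Google
    Crawl-delay: 10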

googlebot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 5

adsbot-google

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

googlebot-mobile

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 5

bingbot

Rule Path
Disallow /captcha*
Disallow /captcha/
Disallow /captcha/*

Other Records

Field Value
crawl-delay 10
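
The three Disallow patterns above are redundant under the usual wildcard semantics: paths are matched as prefixes, so /captcha* already covers both /captcha/ and /captcha/*. A sketch of how this group likely appears in the file (casing assumed):

    User-agent: bingbot
    Disallow: /captcha*
    Disallow: /captcha/
    Disallow: /captcha/*
    Crawl-delay: 10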

msnbot

Rule Path
Disallow /captcha*
Disallow /captcha/
Disallow /captcha/*

Other Records

Field Value
crawl-delay 10

msnbot/bingbot

Rule Path
Disallow /captcha*
Disallow /captcha/
Disallow /captcha/*

Other Records

Field Value
crawl-delay 10

shopwiki

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 120

twengabot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 60

twitterbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 30

slurp

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 15

yahoo! slurp

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 15

googlebot-image

Rule Path
Allow /

Other Records

Field Value
crawl-delay 10
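
This group illustrates the point made in the "Google Image Crawler Setup" comment below: a crawler obeys only the single most specific group that matches its name, so giving Googlebot-Image its own section with a blanket Allow exempts it from the generic * rules at the end of the file. A sketch (casing assumed):

    # A crawler-specific section makes Googlebot-Image ignore the generic * group
    User-agent: Googlebot-Image
    Allow: /
    Crawl-delay: 10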

yandexbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20

pinterest

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

ahrefsbot

Rule Path
Disallow /
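
From here on, the file blocks a long run of SEO, scraping, and archiving agents outright. Each of these groups reduces to the same two-line pattern in the source file; a sketch for this one (comment and casing assumed):

    # Block Ahrefs
    User-agent: AhrefsBot
    Disallow: /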

seokicks-robot

Rule Path
Disallow /

sistrix crawler

Rule Path
Disallow /

uptimerobot/2.0

Rule Path
Disallow /

ut-dorkbot/1.0

Rule Path
Disallow /

ezooms robot

Rule Path
Disallow /

perl lwp

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

netestate ne crawler (+http://www.website-datenbank.de/)

Rule Path
Disallow /

wiseguys robot

Rule Path
Disallow /

turnitin robot

Rule Path
Disallow /

turnitinbot

Rule Path
Disallow /

turnitin bot

Rule Path
Disallow /

turnitinbot/3.0 (http://www.turnitin.com/robot/crawlerinfo.html)

Rule Path
Disallow /

turnitinbot/3.0

Rule Path
Disallow /

heritrix

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

eccp/1.0 (search@eniro.com)

Rule Path
Disallow /

baiduspider
baiduspider-video
baiduspider-image
mozilla/5.0 (compatible; baiduspider/2.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/3.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/4.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/5.0; +http://www.baidu.com/search/spider.html)
baiduspider/2.0
baiduspider/3.0
baiduspider/4.0
baiduspider/5.0

Rule Path
Disallow /
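
In robots.txt, consecutive User-agent lines ahead of a rule block form a single group, which is how one Disallow rule covers every Baidu variant listed above. Most parsers match on the short product token (e.g. "baiduspider"), so the full browser-style strings are likely never matched and are redundant. A sketch of the group's shape (casing assumed, list abridged):

    # Block Baidu - several User-agent lines sharing one rule block
    User-agent: Baiduspider
    User-agent: Baiduspider-video
    User-agent: Baiduspider-image
    Disallow: /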

sogou spider

Rule Path
Disallow /

youdaobot

Rule Path
Disallow /

gsa-crawler (enterprise; t4-knhh62cdkc2w3; gsa_manage@nikon-sys.co.jp)

Rule Path
Disallow /

megaindex.ru/2.0

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

mail.ru_bot/2.0

Rule Path
Disallow /

mail.ru

Rule Path
Disallow /

mail.ru_bot/2.0; +http://go.mail.ru/help/robots

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mj12bot/v1.4.3

Rule Path
Disallow /

Other Records

Field Value
crawl-delay 30

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

twiceler

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

nutch

Rule Path
Disallow /

spock

Rule Path
Disallow /

omniexplorer_bot

Rule Path
Disallow /

becomebot

Rule Path
Disallow /

geniebot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

mlbot

Rule Path
Disallow /

linguee bot

Rule Path
Disallow /

aihitbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

sbider/nutch

Rule Path
Disallow /

jyxobot

Rule Path
Disallow /

magent

Rule Path
Disallow /

speedy spider

Rule Path
Disallow /

shopwiki

Rule Path
Disallow /

huasai

Rule Path
Disallow /

datacha0s

Rule Path
Disallow /

baiduspider

Rule Path
Disallow /

atomic_email_hunter

Rule Path
Disallow /

mp3bot

Rule Path
Disallow /

winhttp

Rule Path
Disallow /

betabot

Rule Path
Disallow /

core-project

Rule Path
Disallow /

panscient.com

Rule Path
Disallow /

java

Rule Path
Disallow /

libwww-perl

Rule Path
Disallow /

*

Rule Path
Disallow /captcha/
Disallow /captcha*
Disallow /captcha/*

Other Records

Field Value
crawl-delay 10
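
This is the catch-all group that any crawler without a more specific section falls back to; it repeats the /captcha exclusions and the default delay. A sketch (comment text taken from the Comments section below, otherwise assumed):

    # Crawlers Setup
    User-agent: *
    Disallow: /captcha/
    Disallow: /captcha*
    Disallow: /captcha/*
    Crawl-delay: 10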

Comments

  • Delay search-engine bots
  • GOOGLE BOTS
  • MICROSOFT BOTS
  • ShopWiki BOTS
  • Twenga BOTS
  • Twitter BOTS
  • YAHOO BOTS
  • Google Image Crawler Setup - having crawler-specific sections makes it ignore the generic one, e.g. *
  • Yandex tends to be rather aggressive, may be worth keeping them at arm's length
  • Crawlers Setup
  • User-agent: *
  • Block Ahrefs
  • Block SEOkicks
  • Block SISTRIX
  • Block Uptime robot
  • Block Dorkbot
  • Block Ezooms Robot
  • Block Perl LWP
  • Block BlexBot
  • Block SemrushBot
  • Block netEstate NE Crawler (+http://www.website-datenbank.de/)
  • Block WiseGuys Robot
  • Block Turnitin Robot
  • Block Heritrix
  • Block pricepi
  • Block Searchmetrics Bot
  • User-agent: SearchmetricsBot
  • Disallow: /
  • Block Eniro
  • Block Baidu
  • Block SoGou
  • Block Youdao
  • Block Nikon JP Crawler
  • Block MegaIndex.ru
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites or download them for offline viewing. Please obey robots.txt.
  • Directories
  • Request-rate: defines a pages/seconds crawl-rate ratio; 1/20 means one page every 20 seconds.
  • Crawl-delay: defines how many seconds to wait after each successful crawl.
  • Visit-time: defines between which hours the pages may be crawled; for example, 0100-0330 means pages will be indexed between 01:00 and 03:30 GMT. (All three fields are illustrated in the sketch after this list.)
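
The last three comments describe the file's rate-limiting fields. Request-rate and Visit-time are non-standard and are flagged in the Warnings below; Crawl-delay is honored by Bing and (historically) Yandex but ignored by Google. A sketch of all three as the comments describe them (values illustrative):

    User-agent: *
    Crawl-delay: 10          # wait 10 seconds between successful fetches
    Request-rate: 1/20       # non-standard: one page every 20 seconds
    Visit-time: 0100-0330    # non-standard: crawl only between 01:00 and 03:30 GMT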

Warnings

  • 5 invalid lines.
  • `request-rate` is not a known field.
  • `visit-time` is not a known field.