alfi.lu
robots.txt

Robots Exclusion Standard data for alfi.lu

Resource Scan

Scan Details

Site Domain alfi.lu
Base Domain alfi.lu
Scan Status Ok
Last Scan 2024-10-20T06:34:35+00:00
Next Scan 2024-11-19T06:34:35+00:00

Last Scan

Scanned 2024-10-20T06:34:35+00:00
URL https://alfi.lu/robots.txt
Redirect https://www.alfi.lu/robots.txt
Redirect Domain www.alfi.lu
Redirect Base alfi.lu
Domain IPs 185.3.45.48
Redirect IPs 185.3.45.48
Response IP 185.3.45.48
Found Yes
Hash 833ba159818ffab601919c798f14169ddf0c1bc9674e76d25aa63a059b197def
SimHash 2a31995567f0
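The 64-hex-character Hash above is consistent with a SHA-256 digest of the fetched robots.txt body, which lets a scanner detect whether the file changed between scans. A minimal sketch of recomputing such a fingerprint (the body below is a placeholder, not the actual alfi.lu file):

```python
import hashlib

def fingerprint(body: bytes) -> str:
    """Return the SHA-256 hex digest of a robots.txt body."""
    return hashlib.sha256(body).hexdigest()

# Placeholder body for illustration only:
digest = fingerprint(b"User-agent: *\nDisallow: /\n")
print(digest)  # 64 lowercase hex characters
```

Comparing the stored digest with a freshly computed one is enough to decide whether a re-parse is needed; SimHash, by contrast, is a locality-sensitive fingerprint used to estimate how much the file changed.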

Groups

adsbot-google
adsbot-google-mobile
adsbot-google-mobile-apps
adidxbot
applebot
applenewsbot
baiduspider
baiduspider-image
bingbot
bingpreview
bublupbot
ccbot
cliqzbot
coccoc
coccocbot-image
coccocbot-web
daumoa
dazoobot
deusu
duckduckbot
duckduckgo-favicons-bot
euripbot
exploratodo
facebot
feedly
findxbot
gooblog
googlebot
googlebot-image
googlebot-mobile
googlebot-news
googlebot-video
haosouspider
ichiro
istellabot
jikespider
lycos
mail.ru
mediapartners-google
mojeekbot
msnbot
msnbot-media
orangebot
pinterest
plukkie
qwantify
rambler
seznambot
sosospider
slurp
sogou blog
sogou inst spider
sogou news spider
sogou orion spider
sogou spider2
sogou web spider
sputnikbot
teoma
twitterbot
wotbox
yacybot
yandex
yandexmobilebot
yeti
yioopbot
yoozbot
youdaobot

Rule Path
Disallow /en-gb/error/
Disallow /error/
Disallow /en-gb/team/details/christine-grosse-strangmann

*

Rule Path
Disallow /
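Taken together, the two groups mean: the whitelisted bots may crawl everything except the error pages and one team-detail URL, while every other user agent is barred from the whole site. This can be checked with Python's standard-library robots.txt parser; the excerpt below reconstructs the rules with `googlebot` standing in for the full whitelist:

```python
from urllib.robotparser import RobotFileParser

# Reconstructed excerpt of the rules above; the real file lists
# many more whitelisted user-agents before the catch-all group.
ROBOTS = """\
User-agent: googlebot
Disallow: /en-gb/error/
Disallow: /error/
Disallow: /en-gb/team/details/christine-grosse-strangmann

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

print(rp.can_fetch("googlebot", "https://www.alfi.lu/en-gb/"))     # True
print(rp.can_fetch("googlebot", "https://www.alfi.lu/error/"))     # False
print(rp.can_fetch("SomeOtherBot", "https://www.alfi.lu/en-gb/"))  # False
```

Any agent not matching a named group falls through to `User-agent: *`, whose `Disallow: /` blocks it everywhere.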

Other Records

Field Value
sitemap https://www.alfi.lu/sitemap.xml

Comments

  • ROBOTS.TXT
  • Alphabetically ordered whitelisting of legitimate web robots, which obey the
  • Robots Exclusion Standard (robots.txt). Each bot is briefly described in a
  • comment above the (list of) user-agent(s). Uncomment or delete bots you do
  • not wish to allow on your website / which do not need to visit your website.
  • Important: Blank lines are not allowed in the final robots.txt file!
  • Updates can be retrieved from: https://github.com/jonasjacek/robots.txt
  • This document is licensed with a CC BY-NC-SA 4.0 license.
  • Last update: 2019-03-07
  • so.com chinese search engine
  • google.com landing page quality checks
  • google.com app resource fetcher
  • bing ads bot
  • apple.com search engine
  • baidu.com chinese search engine
  • bing.com international search engine
  • bublup.com suggestion/search engine
  • commoncrawl.org open repository of web crawl data
  • cliqz.com german in-product search engine
  • coccoc.com vietnamese search engine
  • daum.net korean search engine
  • dazoo.fr french search engine
  • deusu.de german search engine
  • duckduckgo.com international privacy search engine
  • eurip.com european search engine
  • exploratodo.com Latin American search engine
  • facebook.com social network
  • feedly.com feed fetcher
  • findx.com european search engine
  • goo.ne.jp japanese search engine
  • google.com international search engine
  • so.com chinese search engine
  • goo.ne.jp japanese search engine
  • istella.it italian search engine
  • jike.com / chinaso.com chinese search engine
  • lycos.com & hotbot.com international search engine
  • mail.ru russian search engine
  • google.com adsense bot
  • mojeek.com search engine
  • bing.com international search engine
  • orange.com international search engine
  • pinterest.com social network
  • botje.nl dutch search engine
  • qwant.com french search engine
  • rambler.ru russian search engine
  • seznam.cz czech search engine
  • soso.com chinese search engine
  • yahoo.com international search engine
  • sogou.com chinese search engine
  • sputnik.ru russian search engine
  • ask.com international search engine
  • twitter.com bot
  • wotbox.com international search engine
  • yacy.net p2p search software
  • yandex.com russian search engine
  • search.naver.com south korean search engine
  • yioop.com international search engine
  • yooz.ir iranian search engine
  • youdao.com chinese search engine
  • crawling rule(s) for above bots
  • disallow all other bots
  • Add a link to the site-map. Unfortunately this must be an absolute URL.

Warnings

  • 3 invalid lines.