artpool.hu
robots.txt

Robots Exclusion Standard data for artpool.hu

Resource Scan

Scan Details

Site Domain artpool.hu
Base Domain artpool.hu
Scan Status Ok
Last Scan 2024-09-19T14:54:16+00:00
Next Scan 2024-10-19T14:54:16+00:00

Last Scan

Scanned 2024-09-19T14:54:16+00:00
URL https://artpool.hu/robots.txt
Domain IPs 165.22.81.222
Response IP 165.22.81.222
Found Yes
Hash 16c73512d936bd4541fb205613adacd92781eae1bab714145fc0115114cb9723
SimHash 5354d1db6331

Groups

*

Rule Path
Disallow /*pmin
Disallow /*pmax
Disallow /*cart
Disallow /*srule
Disallow /*account
Disallow /*checkout
Disallow /*view%3Dgrid
Disallow /*view%3Dlist
Disallow /*quickview
Disallow /*search
Disallow /*wishlist
Disallow /*shipping-tracking
Disallow /*contentsearch
Disallow /*search-ajax
Disallow /*payment-details
Disallow /*add-payment
Disallow /*order
Disallow /*newsletter
Disallow /*login
Disallow /*logout
Disallow /*404
Disallow /*giftregistry
Disallow /*error
Disallow /*offline
Disallow /*help
Disallow /*compare
Disallow /*cart
Disallow /*revieworder
Disallow /*billing
Disallow /*addressbook
Disallow /*add-address
Disallow /*register
Disallow /*profile
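The `*` group above relies on wildcard path patterns (`/*cart`, `/*view%3Dgrid`, and so on), which match a URL path anywhere a `*` can expand. A minimal sketch of how a crawler evaluates such a rule, per the common RFC 9309-style matching (`*` as a wildcard, optional `$` as an end anchor; the function name and examples are illustrative, not from the scanner):

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    """Translate a robots.txt path pattern (with * wildcards and an
    optional $ end anchor) into a regex anchored at the start of the
    path, then test the URL path against it."""
    regex = "".join(
        ".*" if ch == "*" else re.escape(ch)
        for ch in pattern.rstrip("$")
    )
    if pattern.endswith("$"):
        regex += "$"
    return re.match(regex, path) is not None

# "Disallow /*cart" blocks any path containing "cart" after the leading slash:
rule_matches("/*cart", "/en/cart")   # True  -> blocked
rule_matches("/*cart", "/about")     # False -> allowed by this rule
```

Note that a pattern like `/*cart` is broader than it may look: it also blocks `/cartography/`, which is a common side effect of keyword-style wildcard rules.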

googlebot-image

Rule Path
Allow /
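Under the Robots Exclusion Protocol, a crawler obeys only the single most specific group that names it; having a `googlebot-image` group here means that bot takes the blanket `Allow /` and ignores the `*` group entirely. A rough sketch of that selection logic, assuming case-insensitive substring matching of the user-agent token (a simplification of real parsers):

```python
def select_group(groups: dict[str, list[str]], user_agent: str) -> list[str]:
    """Pick the rule group whose user-agent token is the longest
    substring match of the crawler's product name; fall back to '*'."""
    ua = user_agent.lower()
    candidates = [tok for tok in groups if tok != "*" and tok in ua]
    if candidates:
        return groups[max(candidates, key=len)]
    return groups.get("*", [])

# Hypothetical groups mirroring this file's structure:
groups = {
    "*": ["Disallow: /*cart", "Disallow: /*search"],
    "googlebot-image": ["Allow: /"],
    "yandexbot": ["Disallow: /"],
}
select_group(groups, "Googlebot-Image/1.0")  # the Allow group; * is ignored
select_group(groups, "SomeOtherBot/2.0")     # falls back to the * rules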

yandexbot

Rule Path
Disallow /

pinterest

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

seokicks-robot

Rule Path
Disallow /

sistrix crawler

Rule Path
Disallow /

uptimerobot/2.0

Rule Path
Disallow /

ezooms robot

Rule Path
Disallow /

perl lwp

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

netestate ne crawler (+http://www.website-datenbank.de/)

Rule Path
Disallow /

wiseguys robot

Rule Path
Disallow /

turnitin robot

Rule Path
Disallow /

turnitinbot

Rule Path
Disallow /

turnitin bot

Rule Path
Disallow /

turnitinbot/3.0 (http://www.turnitin.com/robot/crawlerinfo.html)

Rule Path
Disallow /

turnitinbot/3.0

Rule Path
Disallow /

heritrix

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

eccp/1.0 (search@eniro.com)

Rule Path
Disallow /

baiduspider
baiduspider-video
baiduspider-image
mozilla/5.0 (compatible; baiduspider/2.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/3.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/4.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/5.0; +http://www.baidu.com/search/spider.html)
baiduspider/2.0
baiduspider/3.0
baiduspider/4.0
baiduspider/5.0

Rule Path
Disallow /

sogou spider

Rule Path
Disallow /

youdaobot

Rule Path
Disallow /

gsa-crawler (enterprise; t4-knhh62cdkc2w3; gsa_manage@nikon-sys.co.jp)

Rule Path
Disallow /

megaindex.ru/2.0

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

mail.ru_bot/2.0

Rule Path
Disallow /

mail.ru

Rule Path
Disallow /

mail.ru_bot/2.0; +http://go.mail.ru/help/robots

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mj12bot/v1.4.3

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

twiceler

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

nutch

Rule Path
Disallow /

spock

Rule Path
Disallow /

omniexplorer_bot

Rule Path
Disallow /

becomebot

Rule Path
Disallow /

geniebot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

mlbot

Rule Path
Disallow /

linguee bot

Rule Path
Disallow /

aihitbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

sbider/nutch

Rule Path
Disallow /

jyxobot

Rule Path
Disallow /

magent

Rule Path
Disallow /

speedy spider

Rule Path
Disallow /

shopwiki

Rule Path
Disallow /

huasai

Rule Path
Disallow /

datacha0s

Rule Path
Disallow /

baiduspider

Rule Path
Disallow /

atomic_email_hunter

Rule Path
Disallow /

mp3bot

Rule Path
Disallow /

winhttp

Rule Path
Disallow /

betabot

Rule Path
Disallow /

core-project

Rule Path
Disallow /

panscient.com

Rule Path
Disallow /

java

Rule Path
Disallow /

libwww-perl

Rule Path
Disallow /

Comments

  • Google Image Crawler Setup - having a crawler-specific section makes that crawler ignore the generic group, e.g. *
  • User-agent: Googlebot
  • Disallow:
  • Yandex tends to be rather aggressive; may be worth keeping them at arm's length
  • Crawlers Setup
  • User-agent: *
  • Block Ahrefs
  • Block SEOkicks
  • Block SISTRIX
  • Block Uptime robot
  • Block Ezooms Robot
  • Block Perl LWP
  • Block BlexBot
  • Block netEstate NE Crawler (+http://www.website-datenbank.de/)
  • Block WiseGuys Robot
  • Block Turnitin Robot
  • Block Heritrix
  • Block pricepi
  • Block Searchmetrics Bot
  • User-agent: SearchmetricsBot
  • Disallow: /
  • Block Eniro
  • Block Baidu
  • Block SoGou
  • Block Youdao
  • Block Nikon JP Crawler
  • Block MegaIndex.ru
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites or download them for offline viewing. Please obey robots.txt.

Warnings

  • 4 invalid lines.
  • `noindex` is not a known field.
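The second warning refers to `noindex`, which was only ever an unofficial Google extension for robots.txt and stopped being honored by Google in 2019; it was never part of the standard. A quick way to surface such unknown fields in a robots.txt body (a sketch of the idea, not the scanner's actual validation logic; the field list is a common de-facto set, not exhaustive):

```python
KNOWN_FIELDS = {"user-agent", "allow", "disallow", "sitemap", "crawl-delay"}

def unknown_fields(robots_txt: str) -> list[str]:
    """Return field names in a robots.txt body that fall outside
    the commonly recognized set."""
    found = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if ":" not in line:
            continue  # blank or invalid line
        field = line.split(":", 1)[0].strip().lower()
        if field not in KNOWN_FIELDS:
            found.append(field)
    return found

unknown_fields("User-agent: *\nNoindex: /private\nDisallow: /cart")
# -> ['noindex']
```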