tirlanfarmlife.com
robots.txt

Robots Exclusion Standard data for tirlanfarmlife.com

Resource Scan

Scan Details

Site Domain tirlanfarmlife.com
Base Domain tirlanfarmlife.com
Scan Status Ok
Last Scan 2024-09-29T03:19:10+00:00
Next Scan 2024-10-29T03:19:10+00:00

Last Scan

Scanned 2024-09-29T03:19:10+00:00
URL https://tirlanfarmlife.com/robots.txt
Redirect https://www.tirlanfarmlife.com:443/robots.txt
Redirect Domain www.tirlanfarmlife.com
Redirect Base tirlanfarmlife.com
Domain IPs 104.18.20.228, 104.18.21.228, 2606:4700::6812:14e4, 2606:4700::6812:15e4
Redirect IPs 104.18.20.228, 104.18.21.228, 2606:4700::6812:14e4, 2606:4700::6812:15e4
Response IP 104.18.20.228
Found Yes
Hash 1b9ad9d2dc3e0ee340bd4fe4f6964b6455cf98f3e281fd2b188df3cdc8d59239
SimHash dad4559bcce0
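
The Hash above is 64 hex characters, which matches the length of a SHA-256 digest, though the scanner does not state the algorithm. As a minimal sketch, assuming the Hash is the SHA-256 of the fetched file body, the snippet below re-fetches robots.txt, follows the redirect noted above, and recomputes the digest; a live fetch may of course return a newer file than the 2024-09-29 snapshot.

  import hashlib
  import urllib.request

  # urllib follows the redirect to https://www.tirlanfarmlife.com:443/robots.txt
  # automatically, mirroring the Redirect entries in this report.
  with urllib.request.urlopen("https://tirlanfarmlife.com/robots.txt") as resp:
      body = resp.read()
      final_url = resp.geturl()  # post-redirect URL

  digest = hashlib.sha256(body).hexdigest()
  print("Final URL:", final_url)
  print("SHA-256:  ", digest)
  # If the digest still equals
  # 1b9ad9d2dc3e0ee340bd4fe4f6964b6455cf98f3e281fd2b188df3cdc8d59239,
  # the file is unchanged since this scan.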

Groups

*

Rule Path
Disallow /cart
Disallow /checkout
Disallow /my-account
Disallow /my-company
Disallow /staff

Other Records

Field crawl-delay
Value 10
Comment 10 seconds between page requests
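
As a rough illustration of how this wildcard group behaves, the sketch below uses Python's standard urllib.robotparser. The agent name GenericCrawler and the /products path are hypothetical, and the expected results assume the live file still matches this scan.

  from urllib.robotparser import RobotFileParser

  rp = RobotFileParser("https://www.tirlanfarmlife.com/robots.txt")
  rp.read()

  # The * group applies to any crawler without a more specific group of its own.
  agent = "GenericCrawler"  # hypothetical agent name
  print(rp.can_fetch(agent, "https://www.tirlanfarmlife.com/cart"))      # expected: False (Disallow /cart)
  print(rp.can_fetch(agent, "https://www.tirlanfarmlife.com/products"))  # expected: True (no matching rule)
  print(rp.crawl_delay(agent))                                           # expected: 10 (seconds between requests)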

cazoodlebot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

dotbot/1.0

Rule Path
Disallow /

gigabot

Rule Path
Disallow /

deepcrawl

Rule Path
Disallow /

seznambot

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

seokicks-robot

Rule Path
Disallow /

sistrix crawler

Rule Path
Disallow /

uptimerobot/2.0

Rule Path
Disallow /

ezooms robot

Rule Path
Disallow /

perl lwp

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

netestate ne crawler (+http://www.website-datenbank.de/)

Rule Path
Disallow /

wiseguys robot

Rule Path
Disallow /

turnitin robot

Rule Path
Disallow /

turnitinbot

Rule Path
Disallow /

turnitin bot

Rule Path
Disallow /

turnitinbot/3.0 (http://www.turnitin.com/robot/crawlerinfo.html)

Rule Path
Disallow /

turnitinbot/3.0

Rule Path
Disallow /

heritrix

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

pimonster

Rule Path
Disallow /

eccp/1.0 (search@eniro.com)

Rule Path
Disallow /

baiduspider
baiduspider-video
baiduspider-image
mozilla/5.0 (compatible; baiduspider/2.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/3.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/4.0; +http://www.baidu.com/search/spider.html)
mozilla/5.0 (compatible; baiduspider/5.0; +http://www.baidu.com/search/spider.html)
baiduspider/2.0
baiduspider/3.0
baiduspider/4.0
baiduspider/5.0

Rule Path
Disallow /

sogou spider

Rule Path
Disallow /

youdaobot

Rule Path
Disallow /

gsa-crawler (enterprise; t4-knhh62cdkc2w3; gsa_manage@nikon-sys.co.jp)

Rule Path
Disallow /

megaindex.ru/2.0

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

megaindex.ru

Rule Path
Disallow /

mail.ru_bot/2.0

Rule Path
Disallow /

mail.ru

Rule Path
Disallow /

mail.ru_bot/2.0; +http://go.mail.ru/help/robots

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mj12bot/v1.4.3

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

twiceler

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

spock

Rule Path
Disallow /

omniexplorer_bot

Rule Path
Disallow /

becomebot

Rule Path
Disallow /

geniebot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

mlbot

Rule Path
Disallow /

linguee bot

Rule Path
Disallow /

aihitbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

jyxobot

Rule Path
Disallow /

magent

Rule Path
Disallow /

speedy spider

Rule Path
Disallow /

shopwiki

Rule Path
Disallow /

huasai

Rule Path
Disallow /

datacha0s

Rule Path
Disallow /

baiduspider

Rule Path
Disallow /

atomic_email_hunter

Rule Path
Disallow /

mp3bot

Rule Path
Disallow /

winhttp

Rule Path
Disallow /

betabot

Rule Path
Disallow /

core-project

Rule Path
Disallow /

panscient.com

Rule Path
Disallow /

libwww-perl

Rule Path
Disallow /

Comments

  • For all robots
  • Block access to specific groups of pages
  • Allow search crawlers to discover the sitemap
  • Sitemap: /sitemap.xml
  • Block CazoodleBot as it does not present correct accept content headers
  • Block MJ12bot as it is just noise
  • Block dotbot as it cannot parse base urls properly
  • Block Gigabot
  • Block Ahrefs
  • Block SEOkicks
  • Block SISTRIX
  • Block Uptime robot
  • Block Ezooms Robot
  • Block Perl LWP
  • Block BlexBot
  • Block netEstate NE Crawler (+http://www.website-datenbank.de/)
  • Block WiseGuys Robot
  • Block Turnitin Robot
  • Block Heritrix
  • Block pricepi
  • Block Searchmetrics Bot
  • User-agent: SearchmetricsBot
  • Disallow: /
  • Block Eniro
  • Block Baidu
  • Block SoGou
  • Block Youdao
  • Block Nikon JP Crawler
  • Block MegaIndex.ru
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites or download them for offline viewing. Please obey robots.txt.

Warnings

  • 4 invalid lines.
  • `request-rate` is not a known field.
  • `visit-time` is not a known field.
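
A minimal sketch of checking the report's findings programmatically: each named group above ends in Disallow /, so those agents should be blocked from the whole site, and site_maps() (Python 3.8+) lists any Sitemap lines the file declares. The agent names are taken from the groups above; whether a Sitemap directive is actually present, rather than only being mentioned in the file's comments, cannot be confirmed from this report alone.

  from urllib.robotparser import RobotFileParser

  rp = RobotFileParser("https://www.tirlanfarmlife.com/robots.txt")
  rp.read()

  # Agents with their own "Disallow: /" group should be barred from everything.
  for agent in ("mj12bot", "ahrefsbot", "blexbot", "seznambot"):
      allowed = rp.can_fetch(agent, "https://www.tirlanfarmlife.com/")
      print(agent, "allowed" if allowed else "blocked")  # expected: blocked

  # Sitemap declarations, if any (Python 3.8+); returns None when absent.
  print(rp.site_maps())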