oliverhellowell.com
robots.txt

Robots Exclusion Standard data for oliverhellowell.com

Resource Scan

Scan Details

Site Domain oliverhellowell.com
Base Domain oliverhellowell.com
Scan Status Ok
Last Scan 2025-10-09T09:12:53+00:00
Next Scan 2025-10-23T09:12:53+00:00

Last Scan

Scanned 2025-10-09T09:12:53+00:00
URL https://oliverhellowell.com/robots.txt
Redirect https://www.oliverhellowell.com/robots.txt
Redirect Domain www.oliverhellowell.com
Redirect Base oliverhellowell.com
Domain IPs 46.17.88.201
Redirect IPs 46.17.88.201
Response IP 46.17.88.201
Found Yes
Hash c25190e962cc21d549c069a484d81f66ddcea9af7d1037e8bf116aa060b7a596
SimHash 7e343959eee7

Groups

mediapartners-google*

Rule Path
Disallow /

israbot

Rule Path
Disallow

orthogaffe

Rule Path
Disallow

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

ia_archiver

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

adsbot-google
amazonbot
anthropic-ai
applebot-extended
bytespider
ccbot
chatgpt-user
claudebot
claude-web
cohere-ai
diffbot
facebookbot
friendlycrawler
google-extended
googleother
gptbot
img2dataset
omgili
omgilibot
peer39_crawler
peer39_crawler/1.0
perplexitybot
youbot

Rule Path
Disallow /
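
Read back into robots.txt syntax, this group is a run of User-agent lines that all share the single rule beneath them. The sketch below is a reconstruction from the scan data above (abbreviated to a few of the listed agents), not the verbatim file:

  # AI and data-harvesting crawlers: blocked from the whole site
  User-agent: gptbot
  User-agent: ccbot
  User-agent: claudebot
  User-agent: perplexitybot
  Disallow: /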

*

Rule Path
Disallow /images/
Disallow /trial/
Disallow /register/
Disallow /basket.php
Disallow /cancelled.php
Disallow /confirmation.php
Disallow /confirmed.php
Disallow /customer.php
Disallow /emptybasket.php
Disallow /logout.php
Disallow /newpassword.php
Disallow /paypaldisabled.php
Disallow /popup.php
Disallow /preview.php
Disallow /prints.php
Disallow /printtype.php
Disallow /removefrombasket.php
Disallow /shipping.php
Disallow /thankyou.php

Other Records

Field Value
crawl-delay 1
sitemap https://www.oliverhellowell.com/sitemap.xml
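
In the file itself these records would typically appear as single directive lines; the sketch below is based on the values reported above, and their exact placement within the file is an assumption:

  # Ask crawlers to wait one second between requests
  Crawl-delay: 1
  # Point crawlers at the XML sitemap
  Sitemap: https://www.oliverhellowell.com/sitemap.xml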

Comments

  • robots.txt for http://www.portfolioseries.co.uk/
  • taken from wikipedia.org
  • Form is as follows (empty file indicates all robots/paths allowed):
  • User-Agent: <name of user agent (* for all)>
  • Disallow: <paths to disallow (/ for all)>
  • Please note: There are a lot of pages on this site, and there are
  • some misbehaved spiders out there that go _way_ too fast. If you're
  • irresponsible, your access to the site may be blocked.
  • advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem.
  • Please read the man page and use it properly; there is a
  • --wait option you can use to set the delay between hits,
  • for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Disallow the various AI bots
  • https://github.com/ai-robots-txt/ai.robots.txt
  • Friendly, low-speed bots are welcome viewing article pages, but not
  • dynamically-generated pages please.
  • Inktomi's "Slurp" can read a minimum delay between hits; if your
  • bot supports such a thing using the 'Crawl-delay' or another
  • instruction, please let us know.
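
Combining the form described in these comments with the groups reported above, a minimal excerpt of the file would read roughly as follows. This is a reconstruction for illustration, not the verbatim source, and only a few of the wildcard group's paths are shown:

  # Advertising-related bot: blocked site-wide
  User-agent: mediapartners-google*
  Disallow: /

  # Wikipedia work bot: an empty Disallow value allows everything
  User-agent: israbot
  Disallow:

  # All other crawlers: only dynamically-generated pages are blocked
  User-agent: *
  Disallow: /basket.php
  Disallow: /logout.php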