octoly.com
robots.txt

Robots Exclusion Standard data for octoly.com

Resource Scan

Scan Details

Site Domain octoly.com
Base Domain octoly.com
Scan Status Ok
Last Scan 2024-07-01T00:01:37+00:00
Next Scan 2024-07-15T00:01:37+00:00

Last Scan

Scanned 2024-07-01T00:01:37+00:00
URL https://www.octoly.com/robots.txt
Redirect https://app.im.skeepers.io/robots.txt
Redirect Domain app.im.skeepers.io
Redirect Base skeepers.io
Domain IPs 162.159.134.42
Redirect IPs 108.156.133.101, 108.156.133.6, 108.156.133.62, 108.156.133.97
Response IP 108.156.133.97
Found Yes
Hash b19d1aaf7486d110252d0474e15408281c599d0b4de468cc91313a7584a1e6ce
SimHash a29f39abce60
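The Hash above is a 64-hex-digit content digest, a length consistent with SHA-256. A minimal sketch of how such a fingerprint could be reproduced from a fetched robots.txt body (the sample body below is illustrative, not the live file):

```python
import hashlib

# Illustrative robots.txt body -- NOT the live octoly.com file.
body = b"User-agent: *\nDisallow: /pro/sign_up\n"

# A 64-hex-digit fingerprint like the report's Hash field, assuming SHA-256.
digest = hashlib.sha256(body).hexdigest()
print(len(digest))  # 64
```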

Groups

*

Rule Path
Disallow /pro/sign_up
Disallow /creators/waiting/
Disallow /c/

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

psbot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

speedy

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

bloglines/3.1

Rule Path
Disallow /

jyxobot/1

Rule Path
Disallow /

cityreview

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /
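The groups above can be exercised with Python's standard-library robots.txt parser. The sketch below rebuilds a minimal version of the file (the wildcard group plus one of the fully-disallowed crawlers) and checks a few paths:

```python
from urllib.robotparser import RobotFileParser

# Minimal reconstruction of the rules listed above: the wildcard group
# plus one of the many crawlers that is disallowed from the whole site.
robots_lines = """\
User-agent: *
Disallow: /pro/sign_up
Disallow: /creators/waiting/
Disallow: /c/

User-agent: wget
Disallow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_lines)

# Generic browsers fall under the * group: blocked paths vs. the rest.
print(rp.can_fetch("Mozilla/5.0", "https://app.im.skeepers.io/pro/sign_up"))  # False
print(rp.can_fetch("Mozilla/5.0", "https://app.im.skeepers.io/"))             # True

# wget matches its own group and is disallowed everywhere.
print(rp.can_fetch("wget", "https://app.im.skeepers.io/anything"))            # False
```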

Other Records

Field Value
sitemap https://app.im.skeepers.io/sitemap.xml

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-agent: *
  • Disallow: /