staleycu.com
robots.txt

Robots Exclusion Standard data for staleycu.com

Resource Scan

Scan Details

Site Domain staleycu.com
Base Domain staleycu.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-08-28T02:04:52+00:00
Next Scan 2025-11-26T02:04:52+00:00

Last Successful Scan

Scanned 2025-03-16T07:04:16+00:00
URL https://staleycu.com/robots.txt
Domain IPs 147.78.3.161
Response IP 147.78.3.161
Found Yes
Hash f4954ce02467a6a659ea015ec58e463c1734c0703dcec1e2c5d12947f5a9fabb
SimHash 329add816103
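
The Hash field is a 64-character hex digest, consistent with SHA-256 over the fetched file body; SimHash is a locality-sensitive fingerprint, presumably used to spot near-identical revisions between scans. Below is a minimal sketch of such a scan step, assuming an HTTPS fetch, 4xx responses mapped to the "client error" failure recorded above, and SHA-256 for the Hash field; the scanner's actual pipeline is an assumption.

```python
import hashlib
import urllib.error
import urllib.request

def scan_robots(url="https://staleycu.com/robots.txt"):
    """Fetch a robots.txt and report either a failure reason or a body hash."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
    except urllib.error.HTTPError as err:
        # 4xx is the "client error" class recorded in the failed scan above.
        kind = "client" if 400 <= err.code < 500 else "server"
        return {"status": "Failed", "stage": "Fetching resource",
                "reason": f"Server returned a {kind} error ({err.code})"}
    return {"status": "OK", "hash": hashlib.sha256(body).hexdigest()}

print(scan_robots())
```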

Groups

*

No rules defined. All paths allowed.

Other Records

Field       | Value
crawl-delay | 10

Each of the remaining per-bot groups contains a single rule. An empty Path means the group's Disallow line has no path, which imposes no restriction; a Path of / blocks that agent from the entire site. Duplicate groups (mj12bot, pimonster, psbot, blexbot) appear verbatim in the source file. A parsing sketch follows the full group listing below.

User-agent(s) | Rule | Path
googlebot | Disallow | (empty)
mediapartners-google | Disallow | (empty)
bingbot | Disallow | (empty)
slurp | Disallow | (empty)
duckduckbot | Disallow | (empty)
yandex | Disallow | /
yeti | Disallow | /
semrushbot | Disallow | /
semrushbot-sa | Disallow | /
nextgensearchbot | Disallow | /
ia_archiver | Disallow | /
baiduspider | Disallow | /
picscout | Disallow | /
mj12bot | Disallow | /
ahrefsbot | Disallow | /
ccbot | Disallow | /
blexbot crawler | Disallow | /
tineye | Disallow | /
sogou spider | Disallow | /
exabot | Disallow | /
nutch | Disallow | /
mj12bot | Disallow | /
python-urllib | Disallow | /
dotbot | Disallow | /
seokicks-robot | Disallow | /
blexbot | Disallow | /
sistrix crawler | Disallow | /
uptimerobot/2.0 | Disallow | /
ezooms robot | Disallow | /
perl lwp | Disallow | /
netestate ne crawler (+http://www.website-datenbank.de/) | Disallow | /
wiseguys robot | Disallow | /
turnitin robot | Disallow | /
heritrix | Disallow | /
pimonster | Disallow | /
pimonster | Disallow | /
pi-monster | Disallow | /
eccp/1.0 (search@eniro.com) | Disallow | /
baiduspider, baiduspider-video, baiduspider-image | Disallow | /
psbot | Disallow | /
youdaobot | Disallow | /
blexbot | Disallow | /
naverbot, yeti | Disallow | /
psbot | Disallow | /
zbot | Disallow | /
vagabondo | Disallow | /
linkwalker | Disallow | /
xenu link sleuth | Disallow | /
simplepie | Disallow | /
wget | Disallow | /
pixray-seeker | Disallow | /
boardreader | Disallow | /
unknown bot | Disallow | /

*

Rule | Path
Disallow | /home-4-copy/
Disallow | /cards/actions/skip-loan-payment-thank-you
Disallow | /loans/actions/holiday-cash2
Disallow | /cards/all-card-overview/platinum-card-accept/
Disallow | /uploads/
Disallow | /rates-up/
Disallow | /loan-rates-up/
Disallow | /wp-admin/
Disallow | /wp-login.php
Allow | /wp-admin/admin-ajax.php
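
How these groups behave depends on the consumer. Here is a minimal sketch using Python's standard-library urllib.robotparser against a condensed excerpt of the rules above; the file's two separate * groups are merged into one here to sidestep parser-specific handling of duplicate groups.

```python
import urllib.robotparser

# Condensed excerpt of the staleycu.com rules shown above.
ROBOTS_TXT = """\
User-agent: googlebot
Disallow:

User-agent: yandex
Disallow: /

User-agent: *
Crawl-delay: 10
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# An empty Disallow imposes no restriction, so googlebot may fetch anything.
print(rp.can_fetch("googlebot", "https://staleycu.com/loans/"))  # True

# "Disallow: /" matches every path, so yandex is shut out entirely.
print(rp.can_fetch("yandex", "https://staleycu.com/loans/"))     # False

# Agents without their own group fall back to the "*" group's crawl-delay.
print(rp.crawl_delay("somebot"))                                 # 10

# The stdlib parser applies rules in file order (first match wins), so the
# earlier Disallow /wp-admin/ shadows the later Allow line here.
print(rp.can_fetch("somebot",
                   "https://staleycu.com/wp-admin/admin-ajax.php"))  # False
```

Note the last check: longest-match parsers such as Googlebot's would let /wp-admin/admin-ajax.php through, which is presumably what the Allow line intends; under the stdlib's in-order semantics the Allow line would need to precede the Disallow to take effect.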

Comments

  • ****************************************************************************
  • robots.txt
  • : Robots, spiders, and search engines use this file to determine which
  • content they should *not* crawl while indexing your website.
  • : This system is called "The Robots Exclusion Standard."
  • : You are strongly encouraged to use a robots.txt validator to check
  • for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area.
  • : User-agent: *
  • : Disallow: /admin/
  • Restrict Google and MSN from indexing your images.
  • : User-agent: Googlebot
  • : Disallow: /images/
  • : User-agent: MSNBot
  • : Disallow: /images/
  • ****************************************************************************
  • Block Yandex
  • Block Yeti
  • Block SemrushBot
  • Block SemrushBot-SA
  • Block NextGenSearchBot
  • Block ia-archiver from crawling site
  • Block Baiduspider from crawling site
  • Block PicScout Crawler from crawling site
  • Block MJ12bot from crawling site
  • Block 008 from crawling site
  • Block AhrefsBot from crawling site
  • Block CCBot Crawler from crawling site
  • Block BLEXBot Crawler from crawling site
  • Block TinEye from crawling site
  • Block Sogou Spider from crawling site
  • Block Exabot from crawling site
  • Block Nutch from crawling site
  • Block MJ12bot as it is just noise
  • Block Python-urllib
  • Block dotbot
  • Block SEOkicks
  • Block BlexBot
  • Block SISTRIX
  • Block Uptime robot
  • Block Ezooms Robot
  • Block Perl LWP
  • Block netEstate NE Crawler (+http://www.website-datenbank.de/)
  • Block WiseGuys Robot
  • Block Turnitin Robot
  • Block Heritrix
  • Block pricepi
  • Block Eniro
  • Block Baidu
  • Block Psbot
  • Block Youdao
  • BLEXBot
  • Block NaverBot
  • Block Psbot
  • Block ZBot
  • Block Vagabondo
  • Block LinkWalker
  • Block Xenu Link Sleuth
  • Block SimplePie
  • Block Wget
  • Block Pixray-Seeker
  • Block BoardReader
  • Block Unknown Bot

Warnings

  • 2 invalid lines.
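
The report does not identify the two invalid lines. Below is a rough sketch of the kind of line-level check a validator might use to flag them; the accepted directive set is an assumption, and real scanners may accept more (or fewer) fields.

```python
# Directives accepted by this hypothetical checker; comments and blanks pass.
KNOWN_FIELDS = {"user-agent", "disallow", "allow", "crawl-delay",
                "sitemap", "host"}

def invalid_lines(text: str):
    """Return (line number, raw line) for lines that are not blank, not a
    comment, and not a recognized "Field: value" directive."""
    bad = []
    for n, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        field, sep, _value = line.partition(":")
        if not sep or field.strip().lower() not in KNOWN_FIELDS:
            bad.append((n, raw))
    return bad

print(invalid_lines("User-agent: *\nDisalow: /tmp/\njust some text\n"))
# [(2, 'Disalow: /tmp/'), (3, 'just some text')]
```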