feelcats.com
robots.txt

Robots Exclusion Standard data for feelcats.com

Resource Scan

Scan Details

Site Domain feelcats.com
Base Domain feelcats.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2024-10-26T06:45:40+00:00
Next Scan 2024-11-25T06:45:40+00:00
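
The latest scan failed at the fetch step with a client (4xx) error. A minimal sketch of that fetch-and-classify step, using only Python's standard library (the URL comes from the report; the error-handling split is an assumption about how such a scanner distinguishes failure reasons):

    import urllib.request
    import urllib.error

    def fetch_robots(url: str) -> bytes:
        """Fetch robots.txt, separating client errors from other failures."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if 400 <= e.code < 500:
                # A scanner would record: Failure Stage "Fetching resource",
                # Failure Reason "Server returned a client error".
                raise RuntimeError(f"client error {e.code} fetching {url}") from e
            raise

    body = fetch_robots("https://feelcats.com/robots.txt")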

Last Successful Scan

Scanned 2024-09-27T06:42:05+00:00
URL https://feelcats.com/robots.txt
Domain IPs 34.149.120.3, 34.149.36.179, 34.160.81.203, 35.244.153.44
Response IP 34.120.190.48
Found Yes
Hash 9a2205dfb34e2fbf80eb97245757154c83c5710a02405ff79f942e1de54afe2b
SimHash e1dc7a19cce4
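
The Hash above is a 64-hex-digit value, consistent with a SHA-256 digest of the fetched body (an assumption about this scanner, not documented in the report). A sketch of checking such a fingerprint against a fresh fetch:

    import hashlib

    def body_fingerprint(body: bytes) -> str:
        # Assumes the report's Hash field is SHA-256 over the raw response body.
        return hashlib.sha256(body).hexdigest()

    expected = "9a2205dfb34e2fbf80eb97245757154c83c5710a02405ff79f942e1de54afe2b"
    # changed = body_fingerprint(body) != expected  # body from the fetch sketch above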

Groups

*

Rule Path
Disallow /*.CVS
Disallow /*.svn
Disallow /*.idea
Disallow /*.sql
Disallow /*.tgz
Disallow /cgi-bin/
Disallow /cleanup.php
Disallow /apc.php
Disallow /memcache.php
Disallow /phpinfo.php
Disallow /*?SID=

*

Rule Path
Allow /wp-content/uploads/*
Allow /wp-content/*.js
Allow /wp-content/*.css
Allow /wp-includes/*.js
Allow /wp-includes/*.css
Disallow /cgi-bin
Disallow /wp-content/plugins/
Disallow /wp-content/themes/
Disallow /wp-includes/
Disallow /*/attachment/
Disallow /tag/*/page/
Disallow /tag/*/feed/
Disallow /page/
Disallow /comments/
Disallow /xmlrpc.php
Disallow /?attachment_id*
Disallow /?preview_id*

*

Rule Path
Disallow /?s=
Disallow /search

*

Rule Path
Disallow /trackback
Disallow /*trackback
Disallow /*trackback*
Disallow /*/trackback

*

Rule Path
Allow /feed/$
Disallow /feed/
Disallow /comments/feed/
Disallow /*/feed/$
Disallow /*/feed/rss/$
Disallow /*/trackback/$
Disallow /*/*/feed/$
Disallow /*/*/feed/rss/$
Disallow /*/*/trackback/$
Disallow /*/*/*/feed/$
Disallow /*/*/*/feed/rss/$
Disallow /*/*/*/trackback/$
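
The groups above rely on the * and $ wildcard extensions, which Python's urllib.robotparser does not implement. A simplified matcher following Google's documented longest-match evaluation (not a full parser) shows why Allow /feed/$ wins over Disallow /feed/ for the exact URL /feed/, while deeper feed paths stay blocked; the same logic covers patterns like /*.CVS and googlebot's /*.css$:

    import re

    def rule_to_regex(path: str) -> re.Pattern:
        """Translate a robots.txt pattern ('*' wildcard, trailing '$' anchor) to a regex."""
        anchored = path.endswith("$")
        body = re.escape(path[:-1] if anchored else path).replace(r"\*", ".*")
        return re.compile("^" + body + ("$" if anchored else ""))

    def is_allowed(url_path: str, rules: list[tuple[str, str]]) -> bool:
        """rules: (verb, pattern) pairs. The most specific (longest) matching
        pattern wins, Allow wins ties, and an unmatched URL is allowed."""
        best = ("allow", "")  # default: allowed
        for verb, pattern in rules:
            if not rule_to_regex(pattern).match(url_path):
                continue
            if len(pattern) > len(best[1]) or (
                len(pattern) == len(best[1]) and verb.lower() == "allow"
            ):
                best = (verb.lower(), pattern)
        return best[0] == "allow"

    feed_rules = [("Allow", "/feed/$"), ("Disallow", "/feed/")]
    assert is_allowed("/feed/", feed_rules)           # longer Allow /feed/$ wins
    assert not is_allowed("/feed/atom/", feed_rules)  # deeper paths hit Disallow /feed/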

noxtrumbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20

slurp

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 20
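
Three crawlers (noxtrumbot, msnbot, slurp) are asked to wait 20 seconds between successive requests. A sketch of a crawler loop honoring such a record, assuming a caller-supplied URL list for one host:

    import time
    import urllib.request

    CRAWL_DELAY = 20  # seconds, from the crawl-delay records above

    def polite_fetch(urls):
        """Fetch URLs from one host, sleeping crawl-delay seconds between requests."""
        last = 0.0  # monotonic timestamp of the previous request; no sleep before the first
        for url in urls:
            wait = CRAWL_DELAY - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)
            last = time.monotonic()
            with urllib.request.urlopen(url, timeout=10) as resp:
                yield url, resp.read()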

orthogaffe

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

googlebot

Rule Path
Allow /*.css$
Allow /*.js$
Disallow /m/
Disallow /mobile/

Other Records

Field Value
sitemap https://www.feelcats.com/sitemap.xml
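
The sitemap record points crawlers at https://www.feelcats.com/sitemap.xml. A short sketch of listing the <loc> entries from such a file with the standard library (assumes a plain urlset document, not a sitemap index):

    import urllib.request
    import xml.etree.ElementTree as ET

    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def sitemap_urls(url: str) -> list[str]:
        """Return the <loc> values of a urlset-style sitemap."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            root = ET.fromstring(resp.read())
        return [loc.text for loc in root.findall(".//sm:loc", NS)]

    # urls = sitemap_urls("https://www.feelcats.com/sitemap.xml")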

Comments

  • For the domain www.feelcats.com
  • Enable robots.txt rules for all crawlers
  • Crawl-delay parameter: number of seconds to wait between successive requests to the same server.
  • Set a custom crawl rate if you're experiencing traffic problems with your server.
  • Crawl-delay: 10
  • Magento sitemap: uncomment and replace the URL to your Magento sitemap file. I'M ADDING THE BLOG ONES TOO
  • DEVELOPMENT RELATED SETTINGS
  • Do not crawl development files and folders: CVS, svn directories and dump files
  • SERVER SETTINGS
  • Do not crawl common server technical folders and files
  • Do not crawl the second home page copy (example.com/index.php/). Uncomment it only if you have activated Magento SEO URLs.
  • Disallow: /index.php/
  • Do not crawl links with session IDs
  • WORDPRESS SEO IMPROVEMENTS 14/3/2019
  • robots.txt rules from Raiola Networks
  • some options need to be customized or they may cause problems
  • Basic blocking for all bots and crawlers
  • may cause problems by blocking resources in GWT
  • Blocking of dynamic URLs
  • Disallow: /*?
  • Blocking of searches
  • Blocking of trackbacks
  • Blocking of feeds for crawlers
  • We slow down some bots that tend to go wild
  • Blocking of low-value bots and crawlers
  • Prevents blocked-resource problems in Google Webmaster Tools
  • Crawling errors in Search Console: 404s at addresses that never existed