sundazed.com
robots.txt

Robots Exclusion Standard data for sundazed.com

Resource Scan

Scan Details

Site Domain sundazed.com
Base Domain sundazed.com
Scan Status Ok
Last Scan 2024-09-09T21:56:31+00:00
Next Scan 2024-10-09T21:56:31+00:00

Last Scan

Scanned 2024-09-09T21:56:31+00:00
URL https://sundazed.com/robots.txt
Domain IPs 104.16.39.93, 104.16.40.93, 104.16.41.93, 104.16.42.93, 104.16.43.93, 2606:4700::6810:275d, 2606:4700::6810:285d, 2606:4700::6810:295d, 2606:4700::6810:2a5d, 2606:4700::6810:2b5d
Response IP 104.16.42.93
Found Yes
Hash 2f4a9bb0fe82ff26f522dd30782a5785afd51b631d43f382caccc734c03d4aec
SimHash 3ac01513ccf5

Groups

*

Product Comment
* These rules apply to all crawlers
Rule Path Comment
Disallow /store/adm Crawlers do not need access to your console, so this rule disallows all console pages for crawlers
Disallow /store/shopCa Crawlers cannot add items to a shopping cart, so this rule prevents them from viewing the cart page
Disallow /store/WriteR Crawlers cannot submit product reviews, so there is no need for them to view this page. The content of product reviews is on a separate page that is allowed
Disallow /store/addtoc This page is used in embedded commerce to add items to the cart. It contains no content, so crawlers do not need to see it
Disallow /store/OnePageCh The checkout page generally has no inherent SEO value, and crawlers cannot add items to a cart to reach it, so it is disallowed
Disallow /store/checko This page contains no content
Disallow /*Attrib%3D If you use attributes, these four rules allow you to avoid duplicate content warnings when customers filter by specific attributes; together they cover both the literal ("=") and URL-encoded ("%3D") forms
Disallow /*?Attrib= (same comment as above)
Disallow *attribs%3D (same comment as above)
Disallow *Attribs%3D (same comment as above)

Other Records

Field Value
crawl-delay 20
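The attribute rules above rely on wildcard path patterns, which Python's stdlib robots.txt parser treats literally rather than as wildcards. Below is a minimal sketch of the matching semantics used by major crawlers (per the Robots Exclusion Protocol: `*` matches any sequence of characters, a trailing `$` anchors the end of the path, everything else is literal); the helper name is our own:

```python
import re

def robots_pattern_to_regex(pattern: str) -> "re.Pattern[str]":
    """Translate a robots.txt path pattern into a compiled regex.

    '*' matches any sequence of characters, a trailing '$' anchors the
    end of the URL path, and all other characters match literally.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape each literal segment, then stitch them with ".*" for each "*".
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile("^" + body + ("$" if anchored else ""))

# The /*?Attrib= rule blocks any path containing a literal "?Attrib=".
rule = robots_pattern_to_regex("/*?Attrib=")
print(bool(rule.match("/store/item?Attrib=Color")))  # True: filtered URL is blocked
print(bool(rule.match("/store/item")))               # False: plain URL is not
```

Patterns without a leading slash, such as `*attribs%3D`, still work here because the leading `*` simply becomes `.*` after the start anchor.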

seznambot

Rule Path
Disallow /

*

Rule Path
Disallow /

Other Records

Field Value
crawl-delay 600

googlebot
googlebot-image
mediapartners-google
msnbot
msnbot-media
slurp
yahoo-blogs
yahoo-mmcrawler

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /sites/
Disallow /themes/
Disallow /CHANGELOG.txt
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /contact/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /opensearch/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=contact%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F

Other Records

Field Value
crawl-delay 600
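Prefix-style Disallow rules and crawl-delay values like the ones in this group can be checked programmatically. A minimal sketch using Python's stdlib `urllib.robotparser` (the robots.txt text below is an abridged excerpt of the group above, not the full file; note this parser does plain prefix matching and does not understand `*` wildcards in paths):

```python
import urllib.robotparser

# Abridged excerpt of the googlebot group shown above.
robots_txt = """\
User-agent: googlebot
Disallow: /admin/
Disallow: /user/login/
Crawl-delay: 600
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Each Disallow line is a prefix match against the URL path.
print(rp.can_fetch("googlebot", "https://sundazed.com/admin/settings"))  # False
print(rp.can_fetch("googlebot", "https://sundazed.com/node/42"))         # True
print(rp.crawl_delay("googlebot"))                                       # 600
```

A polite crawler would sleep `crawl_delay()` seconds between requests; here 600 seconds means at most one request every ten minutes.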

Comments

  • This file allows you to control web crawler access to specific pages on your site. Web crawlers are programs that search engines run to view and analyze your site in order to index its content. Common crawlers include Googlebot and bingbot. These are the default rules defined for your site and cover pages and directories that crawlers do not need access to.
  • disallow all
  • but allow only important bots
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)