stagedrop.com
robots.txt

Robots Exclusion Standard data for stagedrop.com

Resource Scan

Scan Details

Site Domain stagedrop.com
Base Domain stagedrop.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-07-14T06:02:24+00:00
Next Scan 2025-10-12T06:02:24+00:00

Last Successful Scan

Scanned 2025-02-22T04:49:35+00:00
URL https://stagedrop.com/robots.txt
Domain IPs 104.16.39.93, 104.16.40.93, 104.16.41.93, 104.16.42.93, 104.16.43.93, 2606:4700::6810:275d, 2606:4700::6810:285d, 2606:4700::6810:295d, 2606:4700::6810:2a5d, 2606:4700::6810:2b5d
Response IP 104.16.41.93
Found Yes
Hash fba63b8fc640e14045b51dbe4ebff4a11cd14468d1a84fa2dc1d30ab1bf8f32d
SimHash 32c80553cde5

Groups

*

Product Comment
* These rules apply to all crawlers
Rule Path Comment
Disallow /store/adm Crawlers do not need access to your console, so this rule disallows all console pages
Disallow /store/shopCa Crawlers cannot add items to a shopping cart, so this rule prevents them from viewing the cart page
Disallow /store/WriteR Crawlers cannot submit product reviews, so there is no need for them to view this page. The content of product reviews is on a separate page that is allowed
Disallow /store/addtoc This page is used in embedded commerce and allows items to be added to the cart. There is no content on this page, so crawlers do not need to see it.
Disallow /store/OnePageCh This page generally has no inherent SEO value, and crawlers cannot add items to a cart to reach it, so it is disallowed
Disallow /store/checko This page contains no content
Disallow /*Attrib%3D If you use attributes, these rules allow you to avoid duplicate content warnings when customers filter by specific attributes.
Disallow /*?Attrib= If you use attributes, these rules allow you to avoid duplicate content warnings when customers filter by specific attributes.
Disallow *attribs%3D If you use attributes, these rules allow you to avoid duplicate content warnings when customers filter by specific attributes.
Disallow *Attribs%3D If you use attributes, these rules allow you to avoid duplicate content warnings when customers filter by specific attributes.
Disallow /*?sort= -
Disallow /store/Search.aspx -
Disallow /privacy -
Disallow /terms -
Disallow /store/lost -
Disallow /help -
Allow /store/admin/inc/vendor/jquery.js -
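
The attribute rules above use Google-style wildcards, which go beyond the plain prefix matching of the original robots exclusion standard (Python's built-in urllib.robotparser, for instance, treats a * in a path literally). Below is a minimal Python sketch of how a wildcard-aware crawler might evaluate such a pattern; the pattern-to-regex translation and the sample URLs are illustrative assumptions, not taken from the scan.

import re

def robots_pattern_to_regex(pattern: str) -> re.Pattern:
    # Translate a Google-style robots.txt path pattern into a regex:
    # '*' matches any run of characters, a trailing '$' anchors the
    # match at the end of the path, everything else is literal.
    # (Assumption: this mirrors Google's documented matching rules,
    # not the behavior of any particular crawler visiting this site.)
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile(regex + ("$" if anchored else ""))

# Hypothetical URLs, used only to illustrate the /*?Attrib= rule above.
rule = robots_pattern_to_regex("/*?Attrib=")
print(bool(rule.match("/store/widgets?Attrib=color")))  # True: filtered view is blocked
print(bool(rule.match("/store/widgets")))               # False: the plain page stays crawlable

Under Google's longest-match precedence, a more specific Allow beats a broader Disallow, which is presumably how the final Allow line carves jquery.js out of the console disallow.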

gptbot

Rule Path
Disallow /

zoominfobot

Rule Path
Disallow /

nextgensearchbot

Rule Path
Disallow /

zoombot

Rule Path
Disallow *

serpstatbot

Rule Path
Disallow /

the knowledge ai

Rule Path
Disallow /

popscreen

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /
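
Each of the groups above pairs one user-agent with a blanket Disallow, barring that bot from the entire site. Because these rules are plain paths with no wildcards, Python's standard urllib.robotparser can check them directly; the robots.txt excerpt below is reconstructed from three of the groups (the live file may differ in ordering and comments), and the URLs are illustrative.

from urllib.robotparser import RobotFileParser

# Reconstructed from the per-bot groups above.
ROBOTS_TXT = """\
User-agent: gptbot
Disallow: /

User-agent: semrushbot
Disallow: /

User-agent: mj12bot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A blanket Disallow means these bots may fetch nothing at all.
print(rp.can_fetch("GPTBot", "https://stagedrop.com/store/widgets"))  # False
print(rp.can_fetch("MJ12bot", "https://stagedrop.com/"))              # False

# A crawler not named in any group here falls through as allowed.
print(rp.can_fetch("SomeOtherBot", "https://stagedrop.com/"))         # True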

Comments

  • This file allows you to control web crawler access to specific pages on your site.
  • Web crawlers are programs that search engines run to view and analyze your site so that its content can be indexed.
  • Common crawlers include Googlebot and bingbot.
  • These are the default rules defined for your site and include pages and directories that crawlers do not need to access.