wingsupply.com
robots.txt

Robots Exclusion Standard data for wingsupply.com

Resource Scan

Scan Details

Site Domain wingsupply.com
Base Domain wingsupply.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 5/11/2025, 6:09:19 PM
Next Scan 8/9/2025, 6:09:19 PM

Last Successful Scan

Scanned 9/21/2024, 4:43:52 PM
URL https://wingsupply.com/robots.txt
Redirect https://www.wingsupply.com/robots.txt
Redirect Domain www.wingsupply.com
Redirect Base wingsupply.com
Domain IPs 104.21.93.20, 172.67.203.1, 2606:4700:3032::ac43:cb01, 2606:4700:3037::6815:5d14
Redirect IPs 104.21.93.20, 172.67.203.1, 2606:4700:3032::ac43:cb01, 2606:4700:3037::6815:5d14
Response IP 172.67.203.1
Found Yes
Hash 0880a3e4d53a920f0dcb1b585824f845099a5f46569d925118c807c0534a664d
SimHash 2f9459218371

Groups

baiduspider

Rule Path
Disallow /

yandex

Rule Path
Disallow /

*

Rule Path
Allow /*?p=
Allow /media/
Allow /skin/
Disallow /404/
Disallow /app/
Disallow /cgi-bin/
Disallow /downloader/
Disallow /includes/
Disallow /js/
Disallow /lib/
Disallow /magento/
Disallow /pkginfo/
Disallow /report/
Disallow /stats/
Disallow /var/
Disallow /index.php/
Disallow /catalog/product_compare/
Disallow /catalog/category/view/
Disallow /catalog/product/view/
Disallow /catalogsearch/
Disallow /checkout/
Disallow /control/
Disallow /contacts/
Disallow /customer/
Disallow /customize/
Disallow /newsletter/
Disallow /poll/
Disallow /review/
Disallow /sendfriend/
Disallow /tag/
Disallow /wishlist/
Disallow /productalert/
Disallow /store_id/
Disallow /browse-all-products/
Disallow /seo_sitemap?
Disallow /seo_sitemap/
Disallow /files/
Disallow /cron.php
Disallow /cron.sh
Disallow /error_log
Disallow /install.php
Disallow /LICENSE.html
Disallow /LICENSE.txt
Disallow /LICENSE_AFL.txt
Disallow /STATUS.txt
Disallow /*.php$
Disallow /*?p=*&
Disallow /*?SID=
Disallow /*?uhc_brand=
Disallow /*?limit=
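
Several rules in the * group use Googlebot-style wildcards: * matches any sequence of characters and $ anchors the match at the end of the URL, so Disallow /*.php$ blocks any path ending in .php and Disallow /*?SID= blocks any URL carrying a session-ID parameter. Below is a minimal sketch of how a crawler might evaluate such rules, assuming the longest-match precedence (with Allow winning ties) described in RFC 9309; the matcher and the example URLs are illustrative, and percent-encoding normalization is omitted.

  import re

  def rule_to_regex(path: str) -> re.Pattern:
      """Translate a robots.txt rule path with * and $ wildcards into a regex."""
      pattern = ""
      for ch in path:
          if ch == "*":
              pattern += ".*"           # * matches any sequence of characters
          elif ch == "$":
              pattern += "$"            # $ anchors the match at the end of the URL
          else:
              pattern += re.escape(ch)  # everything else matches literally
      return re.compile(pattern)

  def is_allowed(url_path: str, rules: list[tuple[str, str]]) -> bool:
      """rules holds (directive, path) pairs; the longest matching path wins,
      Allow wins ties, and a URL with no matching rule may be crawled."""
      best_len, allowed = -1, True
      for directive, path in rules:
          if rule_to_regex(path).match(url_path):
              if len(path) > best_len or (len(path) == best_len and directive == "Allow"):
                  best_len, allowed = len(path), directive == "Allow"
      return allowed

  # A few of the rules from the * group above
  rules = [
      ("Allow", "/*?p="),
      ("Disallow", "/*?p=*&"),
      ("Disallow", "/*.php$"),
      ("Disallow", "/checkout/"),
  ]

  print(is_allowed("/decoys?p=2", rules))           # True  -- Allow /*?p= matches
  print(is_allowed("/decoys?p=2&limit=36", rules))  # False -- Disallow /*?p=*& is longer
  print(is_allowed("/index.php", rules))            # False -- Disallow /*.php$ matches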

Other Records

Field Value
crawl-delay 30
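
The crawl-delay of 30 asks crawlers to wait 30 seconds between requests. Below is a minimal sketch of a polite fetch loop using Python's urllib.robotparser, which understands both the path rules and the Crawl-delay record; the user-agent string and the example URLs are illustrative.

  import time
  import urllib.request
  import urllib.robotparser

  USER_AGENT = "example-crawler"  # illustrative; falls under the * group here

  rp = urllib.robotparser.RobotFileParser()
  rp.set_url("https://www.wingsupply.com/robots.txt")
  rp.read()  # fetch and parse the live robots.txt

  delay = rp.crawl_delay(USER_AGENT) or 0  # 30 seconds per the record above

  for url in [
      "https://www.wingsupply.com/",           # illustrative URL
      "https://www.wingsupply.com/checkout/",  # disallowed for * in the rules above
  ]:
      if rp.can_fetch(USER_AGENT, url):
          with urllib.request.urlopen(url) as resp:
              print(url, resp.status)
          time.sleep(delay)  # honor the crawl-delay between requests
      else:
          print("skipping (disallowed):", url)

Note that urllib.robotparser matches rule paths literally and does not interpret the * and $ wildcards, so rules such as /*.php$ are better checked with a wildcard-aware matcher like the sketch after the rule list above.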

Comments

  • ****************************************************************************
  • robots.txt
  • : Robots, spiders, and search engines use this file to determine which
  • content they should *not* crawl while indexing your website.
  • : This system is called "The Robots Exclusion Standard."
  • : It is strongly encouraged to use a robots.txt validator to check
  • for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area.
  • : User-agent: *
  • : Disallow: /admin/
  • Restrict Google and MSN from indexing your images.
  • : User-agent: Googlebot
  • : Disallow: /images/
  • : User-agent: MSNBot
  • : Disallow: /images/
  • : Test to see if git is working
  • ****************************************************************************
  • Baiduspider
  • Yandex
  • Crawlers Setup
  • Allowable Index
  • Directories
  • Disallow: /media/
  • Disallow: /skin/
  • Paths (clean URLs)
  • Files
  • Paths (no clean URLs)
  • Disallow: /*.js$
  • Disallow: /*.css$
  • Disallow: /*?