chocogram.com.au
robots.txt

Robots Exclusion Standard data for chocogram.com.au

Resource Scan

Scan Details

Site Domain chocogram.com.au
Base Domain chocogram.com.au
Scan Status Ok
Last Scan 2025-09-22T23:06:01+00:00
Next Scan 2025-10-22T23:06:01+00:00

Last Scan

Scanned 2025-09-22T23:06:01+00:00
URL https://chocogram.com.au/robots.txt
Domain IPs 104.26.10.115, 104.26.11.115, 172.67.72.253, 2606:4700:20::681a:a73, 2606:4700:20::681a:b73, 2606:4700:20::ac43:48fd
Response IP 104.26.11.115
Found Yes
Hash 0690032c2f9f4ab0edda379d515c76446a20ab9697befed5640b3f6b33cbe8c6
SimHash bd347d23e2dc

Groups

amazonbot

Rule Path
Disallow /

yandexbot

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

petalbot

Rule Path
Disallow /

awariobot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

googlebot

Rule Path
Disallow

googlebot-image

Rule Path
Disallow

*

Rule Path
Disallow /app/
Disallow /bin/
Disallow /dev/
Disallow /lib/
Disallow /phpserver/
Disallow /pkginfo/
Disallow /report/
Disallow /setup/
Disallow /update/
Disallow /var/
Disallow /vendor/
Disallow /index.php/
Disallow /catalog/product_compare/
Disallow /catalog/category/view/
Disallow /catalog/product/view/
Disallow /catalogsearch/
Disallow /checkout/
Disallow /control/
Disallow /contacts/
Disallow /customer/
Disallow /customize/
Disallow /newsletter/
Disallow /review/
Disallow /sendfriend/
Disallow /wishlist/
Disallow /composer.json
Disallow /composer.lock
Disallow /CONTRIBUTING.md
Disallow /CONTRIBUTOR_LICENSE_AGREEMENT.html
Disallow /COPYING.txt
Disallow /Gruntfile.js
Disallow /LICENSE.txt
Disallow /LICENSE_AFL.txt
Disallow /nginx.conf.sample
Disallow /package.json
Disallow /php.ini.sample
Disallow /RELEASE_NOTES.txt
Disallow /*?*product_list_mode=
Disallow /*?*product_list_order=
Disallow /*?*product_list_limit=
Disallow /*?*product_list_dir=
Disallow /*?SID=
Disallow /*?
Disallow /*.php$
Disallow /*.CVS
Disallow /*.Zip$
Disallow /*.Svn$
Disallow /*.Idea$
Disallow /*.Sql$
Disallow /*.Tgz$

Other Records

Field Value
crawl-delay 10
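
The groups and the crawl-delay record above can be exercised with Python's standard-library robots.txt parser. Below is a minimal sketch that reconstructs a few representative rules from this scan (the full file has many more paths); note that `urllib.robotparser` does simple prefix matching and does not implement the `*` and `$` wildcard extensions used in some of the rules.

```python
from urllib.robotparser import RobotFileParser

# Reconstruct a small subset of the scanned rules: a fully blocked bot,
# googlebot's empty Disallow (which permits everything), and a couple of
# the catch-all group's path rules plus its crawl-delay.
robots_txt = """
User-agent: amazonbot
Disallow: /

User-agent: googlebot
Disallow:

User-agent: *
Disallow: /checkout/
Disallow: /customer/
Crawl-delay: 10
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("amazonbot", "https://chocogram.com.au/"))           # False: blocked entirely
print(rp.can_fetch("googlebot", "https://chocogram.com.au/checkout/"))  # True: empty Disallow allows all
print(rp.can_fetch("SomeOtherBot", "https://chocogram.com.au/checkout/"))  # False: matches the * group
print(rp.crawl_delay("SomeOtherBot"))                                   # 10
```

A bot not named in any group falls through to the `*` group, which is why the hypothetical "SomeOtherBot" is refused `/checkout/` and inherits the 10-second crawl delay.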

Comments

  • ****************************************************************************
  • robots.txt
  • : Robots, spiders, and search engines use this file to determine which
  • content they should *not* crawl while indexing your website.
  • : This system is called "The Robots Exclusion Standard."
  • : It is strongly encouraged to use a robots.txt validator to check
  • for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area.
  • : User-agent: *
  • : Disallow: /admin/
  • Restrict Google and MSN from indexing your images.
  • : User-agent: Googlebot
  • : Disallow: /images/
  • : User-agent: MSNBot
  • : Disallow: /images/
  • ****************************************************************************
  • Directories
  • Paths (clean URLs)
  • Files
  • Do not index pages that are sorted or filtered.
  • Do not index session ID
  • CVS, SVN directory and dump files
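
Several of the rules above use Google-style wildcards (`/*?*product_list_order=`, `/*.php$`) rather than plain prefixes. As a rough sketch of how such patterns are commonly interpreted, `*` matches any run of characters and a trailing `$` anchors the match to the end of the path; the translation below to a regex is an illustration, not the matching code any particular crawler uses.

```python
import re

def robots_pattern_to_regex(pattern: str) -> re.Pattern:
    # '*' matches any sequence of characters; a trailing '$' anchors
    # the pattern to the end of the URL path. Everything else is literal.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + regex + ("$" if anchored else ""))

# Sorted/filtered listing pages are caught by the query-string patterns:
print(bool(robots_pattern_to_regex("/*?*product_list_order=")
           .match("/gifts?product_list_order=price")))   # True
# The anchored '.php$' rule matches the file itself but not paths under it:
print(bool(robots_pattern_to_regex("/*.php$").match("/index.php")))    # True
print(bool(robots_pattern_to_regex("/*.php$").match("/index.php/a"))) # False
```

One caveat visible in the scan: path matching is conventionally case-sensitive, so mixed-case rules like `/*.Zip$` and `/*.Svn$` would not match lowercase `.zip` or `.svn` paths under this interpretation.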