lowlaundry.com
robots.txt

Robots Exclusion Standard data for lowlaundry.com

Resource Scan

Scan Details

Site Domain lowlaundry.com
Base Domain lowlaundry.com
Scan Status Ok
Last Scan 2026-01-03T03:39:35+00:00
Next Scan 2026-02-02T03:39:35+00:00

Last Scan

Scanned 2026-01-03T03:39:35+00:00
URL https://lowlaundry.com/robots.txt
Domain IPs 172.66.40.211, 172.66.43.45, 2606:4700:3108::ac42:28d3, 2606:4700:3108::ac42:2b2d
Response IP 172.66.43.45
Found Yes
Hash bcae838262560682844b262580cb71d0281328a24d7c92ac87a55dae2986ed02
SimHash bd34fcd5c7f1
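The Hash value is a 64-character hex digest, which is consistent with SHA-256 over the fetched robots.txt body (the algorithm is an assumption; the report does not name it). The SimHash, by contrast, is a locality-sensitive fingerprint useful for detecting near-duplicate content between scans. A minimal sketch of computing a SHA-256 fingerprint of a fetched body:

```python
import hashlib

def body_fingerprint(body: bytes) -> str:
    # Hex digest with the same 64-character shape as the Hash field above.
    return hashlib.sha256(body).hexdigest()

# Any fetched robots.txt body would be fingerprinted the same way:
print(len(body_fingerprint(b"User-agent: *\nDisallow:\n")))  # 64
```

Comparing this digest between scans is a cheap way to tell whether the file changed at all; the SimHash additionally tells how much it changed.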

Groups

googlebot-image

Rule Path
Disallow


No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

*

Rule Path
Disallow /404/
Disallow /app/
Disallow /cgi-bin/
Disallow /downloader/
Disallow /errors/
Disallow /includes/
Disallow /lib/
Disallow /magento/
Disallow /pkginfo/
Disallow /report/
Disallow /scripts/
Disallow /shell/
Disallow /stats/
Disallow /var/
Disallow /index.php/
Disallow /catalog/product_compare/
Disallow /catalog/category/view/
Disallow /catalogsearch/
Disallow /checkout/
Disallow /onepage/
Disallow /control/
Disallow /contacts/
Disallow /customer/
Disallow /customize/
Disallow /newsletter/
Disallow /poll/
Disallow /review/
Disallow /sendfriend/
Disallow /tag/
Disallow /wishlist/
Disallow /catalog/product/gallery/
Disallow /cron.php
Disallow /cron.sh
Disallow /error_log
Disallow /install.php
Disallow /LICENSE.html
Disallow /LICENSE.txt
Disallow /LICENSE_AFL.txt
Disallow /STATUS.txt
Disallow /*.php$
Disallow /*?SID=
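The last two rules use Google-style wildcard syntax: `*` matches any run of characters and a trailing `$` anchors the end of the path. The original Robots Exclusion Standard defined neither, so not every crawler honors them. A minimal sketch of such a matcher in Python (the function name and structure are illustrative, not any particular crawler's implementation):

```python
import re

def robots_rule_matches(pattern: str, path: str) -> bool:
    """Match a robots.txt rule path against a URL path,
    honoring the Google-style '*' and '$' extensions."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        # A trailing '$' in the rule anchors the end of the path.
        regex = regex[:-2] + "$"
    # Rules always match from the start of the path (prefix semantics).
    return re.match(regex, path) is not None

print(robots_rule_matches("/*.php$", "/index.php"))         # True: ends in .php
print(robots_rule_matches("/*.php$", "/index.php?a=1"))     # False: query string follows
print(robots_rule_matches("/*?SID=", "/catalog?SID=abc"))   # True
print(robots_rule_matches("/checkout/", "/checkout/cart"))  # True: plain prefix rule
```

Note that the stdlib `urllib.robotparser` does plain prefix matching only, so it would not apply `/*.php$` or `/*?SID=` the way Googlebot does.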

Comments

  • Google Image Crawler Setup
  • Try to slow down the stress on the server from crawlers
  • Crawlers Setup
  • Directories
  • Disallow: /js/
  • Disallow: /media/
  • Disallow: /skin/
  • Paths (clean URLs)
  • Files
  • Paths (no clean URLs)
  • Disallow: /*.js$
  • Disallow: /*.css$
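The groups, records, and comments above match the robots.txt template commonly shipped with Magento stores. A plausible reconstruction of the underlying file, abbreviated (the ordering of comments relative to rules, and the group the crawl-delay attaches to, are assumptions based on the listing order in this report):

```
# Google Image Crawler Setup
User-agent: googlebot-image
Disallow:

# Try to slow down the stress on the server from crawlers
Crawl-delay: 10

# Crawlers Setup
User-agent: *

# Directories
Disallow: /404/
Disallow: /app/
# ... the remaining directory, path, and file rules from the table above ...

# Paths (no clean URLs)
# Disallow: /*.js$
# Disallow: /*.css$
Disallow: /*.php$
Disallow: /*?SID=
```

The bulleted `Disallow:` entries under Comments (`/js/`, `/media/`, `/skin/`, `/*.js$`, `/*.css$`) are commented out in the file itself, which is why they appear here rather than in the rule table.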