badkamerdepot.be
robots.txt

Robots Exclusion Standard data for badkamerdepot.be

Resource Scan

Scan Details

Site Domain badkamerdepot.be
Base Domain badkamerdepot.be
Scan Status Ok
Last Scan 2025-03-13T06:05:41+00:00
Next Scan 2025-04-12T06:05:41+00:00

Last Scan

Scanned 2025-03-13T06:05:41+00:00
URL https://www.badkamerdepot.be/robots.txt
Domain IPs 104.26.0.151, 104.26.1.151, 172.67.69.61, 2606:4700:20::681a:197, 2606:4700:20::681a:97, 2606:4700:20::ac43:453d
Response IP 104.26.1.151
Found Yes
Hash cf2dc8746aafa759574961f154ad18d0ac596a3794349cbd472c5c57104448d8
SimHash bc1cb959c7f1
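
The 64-hex-character Hash above looks like a SHA-256 digest of the fetched file. A minimal sketch to reproduce it, assuming the scanner hashes the raw response body with SHA-256 and applies no further normalization:

```python
# Fetch the robots.txt and print a SHA-256 digest of the raw body.
# Assumption: the scan's "Hash" is SHA-256 over the unmodified response;
# any normalization the scanner applies is not documented here.
import hashlib
import urllib.request

URL = "https://www.badkamerdepot.be/robots.txt"

with urllib.request.urlopen(URL) as resp:
    body = resp.read()

print(hashlib.sha256(body).hexdigest())
```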

Groups

*

Rule Path
Allow /*?p=
Allow /index.php/blog/
Allow /catalog/seo_sitemap/category/
Disallow /catalogsearch/result/
Disallow /404/
Disallow /app/
Disallow /cgi-bin/
Disallow /downloader/
Disallow /includes/
Disallow /lib/
Disallow /magento/
Disallow /pkginfo/
Disallow /report/
Disallow /shell/
Disallow /stats/
Disallow /var/
Disallow /index.php/
Disallow /catalog/product_compare/
Disallow /catalog/category/view/
Disallow /catalog/product/view/
Disallow /catalogsearch/
Disallow /checkout/
Disallow /control/
Disallow /contacts/
Disallow /customer/
Disallow /customize/
Disallow /newsletter/
Disallow /poll/
Disallow /review/
Disallow /sendfriend/
Disallow /tag/
Disallow /wishlist/
Disallow /cron.php
Disallow /cron.sh
Disallow /error_log
Disallow /install.php
Disallow /LICENSE.html
Disallow /LICENSE.txt
Disallow /LICENSE_AFL.txt
Disallow /STATUS.txt
Disallow /*?*
Allow /*?*sel=
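
A minimal sketch of how a well-behaved crawler would consult the rules in this group, using Python's standard-library parser. Note that urllib.robotparser matches rules by simple prefix and does not implement the * and $ wildcard extensions, so patterns such as Disallow: /*?* may be evaluated differently than by major search engines; the sample paths are illustrative only, not taken from the scan.

```python
# Check a few illustrative paths against the * group above.
# urllib.robotparser uses prefix matching; wildcard rules such as
# "Disallow: /*?*" are not expanded the way large crawlers expand them.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.badkamerdepot.be/robots.txt")
rp.read()

for path in ("/checkout/", "/index.php/blog/", "/customer/account/login/"):
    url = "https://www.badkamerdepot.be" + path
    verdict = "allowed" if rp.can_fetch("*", url) else "disallowed"
    print(f"{path}: {verdict}")
```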

Other Records

Field Value
crawl-delay 5

Other Records

Field Value
sitemap https://www.badkamerdepot.be/sitemap.xml
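
The same standard-library parser also exposes the crawl-delay and sitemap records listed above (crawl_delay() since Python 3.6, site_maps() since Python 3.8); a short sketch:

```python
# Read the non-rule records: the crawl-delay for the * group and the sitemap.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.badkamerdepot.be/robots.txt")
rp.read()

print(rp.crawl_delay("*"))  # expected 5, per the record above
print(rp.site_maps())       # expected ['https://www.badkamerdepot.be/sitemap.xml']
```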

Comments

  • $Id: robots.txt,v magento-specific 2010/28/01 18:24:19 goba Exp $
  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • Website Sitemap
  • Fix this in webmastertools
  • Crawlers Setup
  • Allowable Index
  • Directories
  • Disallow: /js/
  • Disallow: /media/
  • Disallow: /skin/
  • Paths (clean URLs)
  • Files
  • Paths (no clean URLs)
  • Allow preselect
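
As the Used/Ignored lines in the comments note, crawlers only honor a robots.txt served from the host root. A minimal sketch of that check (the helper name is ours, for illustration):

```python
# Return True only when the URL's path is exactly /robots.txt at the host root.
from urllib.parse import urlsplit

def is_root_robots(url: str) -> bool:
    return urlsplit(url).path == "/robots.txt"

print(is_root_robots("http://example.com/robots.txt"))       # True  -> used
print(is_root_robots("http://example.com/site/robots.txt"))  # False -> ignored
```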