cooktimepro.com
robots.txt

Robots Exclusion Standard data for cooktimepro.com

Resource Scan

Scan Details

Site Domain cooktimepro.com
Base Domain cooktimepro.com
Scan Status Ok
Last Scan 2025-11-01T10:52:42+00:00
Next Scan 2025-11-08T10:52:42+00:00

Last Scan

Scanned 2025-11-01T10:52:42+00:00
URL https://cooktimepro.com/robots.txt
Domain IPs 104.21.56.203, 172.67.187.220, 2606:4700:3033::6815:38cb, 2606:4700:3034::ac43:bbdc
Response IP 104.21.56.203
Found Yes
Hash c6f6823ab1eefed492d48eca4cf93b2daad69152ad5b9e6aabb31d3bb30a332f
SimHash 42ef0a7064d0
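
The Hash value is 64 hexadecimal characters, the length of a SHA-256 digest, while the SimHash is a short locality-sensitive fingerprint that lets the scanner detect near-duplicate revisions between scans. A minimal sketch for reproducing the content hash, assuming it is SHA-256 over the raw response body (the scanner's exact input is an assumption):

    import hashlib
    import urllib.request

    def robots_txt_sha256(url="https://cooktimepro.com/robots.txt"):
        """Fetch robots.txt and return the hex SHA-256 of the raw bytes."""
        with urllib.request.urlopen(url) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    # A mismatch against the recorded value would simply mean the file has
    # changed since the 2025-11-01 scan.
    expected = "c6f6823ab1eefed492d48eca4cf93b2daad69152ad5b9e6aabb31d3bb30a332f"
    print(robots_txt_sha256() == expected)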

Groups

*

Rule      Path
Allow     /
Disallow  /admin/
Disallow  /api/
Disallow  /_astro/
Disallow  /dist/
Disallow  /.astro/
Disallow  /node_modules/
Disallow  /search?*
Disallow  /*?q=*
Disallow  /*?search=*
Disallow  /dev/
Disallow  /test/
Disallow  /examples/
Allow     /sitemap-index.xml
Allow     /sitemap-0.xml
Allow     /sitemap*.xml
Allow     /robots.txt
Allow     /manifest.json
Allow     /favicon.ico
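
This group mixes plain path prefixes with Google-style wildcards, where `*` matches any run of characters. Under RFC 9309 the most specific (longest) matching rule decides access, and Allow wins a length tie. Python's standard urllib.robotparser does not implement the wildcard extension, so a hand-rolled matcher is one way to test URLs against these rules; the sketch below hard-codes a representative subset of the group:

    import re

    # Subset of the "*" group above; rule order does not matter under RFC 9309.
    RULES = [
        ("allow", "/"),
        ("disallow", "/admin/"),
        ("disallow", "/search?*"),
        ("disallow", "/*?q=*"),
        ("allow", "/sitemap*.xml"),
    ]

    def _to_regex(pattern):
        # '*' matches any character run; '$' would anchor the end of the path.
        parts = [".*" if ch == "*" else "$" if ch == "$" else re.escape(ch)
                 for ch in pattern]
        return re.compile("".join(parts))

    def is_allowed(path):
        best_verdict, best_len = "allow", -1  # unmatched paths are allowed
        for verdict, pattern in RULES:
            if _to_regex(pattern).match(path):
                n = len(pattern)
                # Longer pattern wins; Allow wins a length tie.
                if n > best_len or (n == best_len and verdict == "allow"):
                    best_verdict, best_len = verdict, n
        return best_verdict == "allow"

    print(is_allowed("/recipes/roast-chicken"))  # True
    print(is_allowed("/search?q=chicken"))       # False, via /search?*
    print(is_allowed("/sitemap-0.xml"))          # True, via /sitemap*.xml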

googlebot

Rule      Path
Allow     /

Other Records

Field        Value
crawl-delay  1

bingbot

Rule      Path
Allow     /

Other Records

Field        Value
crawl-delay  1

yandexbot

Rule      Path
Allow     /

Other Records

Field        Value
crawl-delay  2

ahrefsbot

No rules defined. All paths allowed.

Other Records

Field        Value
crawl-delay  10

semrushbot

No rules defined. All paths allowed.

Other Records

Field        Value
crawl-delay  10

mj12bot

No rules defined. All paths allowed.

Other Records

Field        Value
crawl-delay  10
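
crawl-delay is not defined by RFC 9309, but many crawlers honor it as a minimum number of seconds between requests; the values here (1 to 2 seconds for the major engines, 10 for the SEO bots above) throttle aggressive crawlers rather than blocking them outright. Python's urllib.robotparser has parsed the field since 3.6, so a polite client can read it directly:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://cooktimepro.com/robots.txt")
    rp.read()  # fetch and parse the live file

    for agent in ("googlebot", "bingbot", "yandexbot", "ahrefsbot", "mj12bot"):
        print(agent, rp.crawl_delay(agent))
    # Expected from the scan above: 1, 1, 2, 10, 10
    # (assuming the file has not changed since 2025-11-01).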

facebookexternalhit/1.1

Rule      Path
Allow     /

twitterbot

Rule      Path
Allow     /

linkedinbot

Rule      Path
Allow     /

whatsapp

Rule      Path
Allow     /

*

No rules defined. All paths allowed.

Other Records

Field        Value
crawl-delay  1

Other Records

Field        Value
sitemap      https://cooktimepro.com/sitemap-index.xml
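
Sitemap is a global field under the sitemaps.org extension: it applies to every crawler regardless of which group it appears near. Since Python 3.8, RobotFileParser exposes it directly:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://cooktimepro.com/robots.txt")
    rp.read()
    print(rp.site_maps())
    # Expected: ['https://cooktimepro.com/sitemap-index.xml']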

Comments

  • Robots.txt for CookTimePro - Cooking Times & Techniques Guide
  • Generated automatically during build process
  • Last updated: 2025-11-01
  • Default rules for all search engines
  • Disallow admin, API, and build assets
  • Disallow search result pages to prevent duplicate content
  • Disallow development and testing paths
  • Allow important SEO files explicitly
  • Google-specific optimizations
  • Bing-specific optimizations
  • Yandex-specific rules
  • Block aggressive crawlers that might impact site performance
  • Allow social media crawlers for sharing
  • HTTPS enforcement directives
  • Force HTTPS for all crawler requests to improve SEO ranking
  • Sitemap locations (HTTPS enforced)
  • Host directive (canonical HTTPS domain)
  • General crawl delay (in seconds)

Warnings

  • `host` is not a known field.
  • `request-rate` is not a known field.
  • `visit-time` is not a known field.
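
The flagged names are legacy extensions rather than errors: `host` was a Yandex directive, since deprecated, while `request-rate` and `visit-time` come from a 1990s robots-exclusion draft; modern crawlers generally ignore all three. A minimal sketch of how such a lint pass could work, with the recognized-field set as an assumption:

    # Recognized fields: RFC 9309 plus the de facto crawl-delay extension.
    KNOWN_FIELDS = {"user-agent", "allow", "disallow", "sitemap", "crawl-delay"}

    def unknown_fields(robots_txt):
        seen = []
        for line in robots_txt.splitlines():
            line = line.split("#", 1)[0].strip()  # drop comments
            if ":" not in line:
                continue
            field = line.split(":", 1)[0].strip().lower()
            if field not in KNOWN_FIELDS and field not in seen:
                seen.append(field)
        return seen

    sample = "User-agent: *\nHost: cooktimepro.com\nRequest-rate: 1/5\nVisit-time: 0600-2300\n"
    print(unknown_fields(sample))  # ['host', 'request-rate', 'visit-time']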