hughhewitt.com
robots.txt

Robots Exclusion Standard data for hughhewitt.com

Resource Scan

Scan Details

Site Domain hughhewitt.com
Base Domain hughhewitt.com
Scan Status Ok
Last Scan 2024-05-20T23:38:15+00:00
Next Scan 2024-05-27T23:38:15+00:00

Last Scan

Scanned 2024-05-20T23:38:15+00:00
URL https://hughhewitt.com/robots.txt
Domain IPs 209.126.24.109
Response IP 209.126.24.109
Found Yes
Hash b6f04e245eb28940fc452c969860c858ec16e8edd6f2c6b4c5c45c7f472d3060
SimHash 38b05d21a555
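
The Hash field is a 64-character hexadecimal digest, which is consistent with SHA-256 over the fetched file body (an assumption; the scanner does not name its algorithm here). A minimal sketch of producing such a digest with Python's standard library:

    import hashlib
    import urllib.request

    # Fetch the current robots.txt and hash its raw bytes. Matching the Hash
    # value above assumes the scanner also uses SHA-256 over these exact bytes,
    # which is not documented here.
    with urllib.request.urlopen("https://hughhewitt.com/robots.txt") as response:
        body = response.read()

    print(hashlib.sha256(body).hexdigest())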

Groups

User-agent       Rule      Path
*                Disallow  (empty: nothing disallowed)
gptbot           Disallow  /
chatgpt-user     Disallow  /
google-extended  Disallow  /
ccbot            Disallow  /
perplexitybot    Disallow  /
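
Read together, these groups allow ordinary crawlers everywhere while blocking the listed AI crawlers from the entire site. The sketch below checks that behaviour with Python's standard urllib.robotparser, using rules reconstructed from the table above rather than the live file, whose exact contents may differ:

    from urllib.robotparser import RobotFileParser

    # Rules reconstructed from the groups above; the live file at
    # https://hughhewitt.com/robots.txt may contain additional lines.
    rules = [
        "User-agent: *",
        "Disallow:",
        "",
        "User-agent: gptbot",
        "Disallow: /",
        "",
        "User-agent: chatgpt-user",
        "Disallow: /",
        "",
        "User-agent: google-extended",
        "Disallow: /",
        "",
        "User-agent: ccbot",
        "Disallow: /",
        "",
        "User-agent: perplexitybot",
        "Disallow: /",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    # The wildcard group disallows nothing, so an ordinary crawler is allowed.
    # "ExampleBot" is an arbitrary user-agent name used for illustration.
    print(parser.can_fetch("ExampleBot", "https://hughhewitt.com/"))  # True
    # Each AI-crawler group disallows "/", so the whole site is off limits.
    print(parser.can_fetch("GPTBot", "https://hughhewitt.com/"))      # False
    print(parser.can_fetch("CCBot", "https://hughhewitt.com/"))       # False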

Comments

  • ****************************************************************************
  • robots.txt
  • Robots, spiders, and search engines use this file to determine which content they should *not* crawl while indexing your website.
  • This system is called "The Robots Exclusion Standard."
  • You are strongly encouraged to use a robots.txt validator to check for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area:
  •   User-agent: *
  •   Disallow: /admin/
  • Restrict Google and MSN from indexing your images:
  •   User-agent: Googlebot
  •   Disallow: /images/
  •   User-agent: MSNBot
  •   Disallow: /images/
  • ****************************************************************************
  • AI Bots
  • OpenAI bots
  • Google AI bots - Bard, Gemini and VertexAI
  • commoncrawl AI
  • Perplexity AI
  • End AI Bots
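
The example directives quoted in these comments behave the same way; the sketch below feeds only those quoted example rules, not this site's file, to Python's urllib.robotparser:

    from urllib.robotparser import RobotFileParser

    # The two examples from the comments: keep all robots out of /admin/,
    # and keep Googlebot and MSNBot out of /images/.
    example_rules = [
        "User-agent: *",
        "Disallow: /admin/",
        "",
        "User-agent: Googlebot",
        "Disallow: /images/",
        "",
        "User-agent: MSNBot",
        "Disallow: /images/",
    ]

    parser = RobotFileParser()
    parser.parse(example_rules)

    # "SomeBot" is an arbitrary user-agent used for illustration; it falls under
    # the wildcard group and is barred only from the admin area.
    print(parser.can_fetch("SomeBot", "https://example.com/admin/"))             # False
    print(parser.can_fetch("SomeBot", "https://example.com/images/logo.png"))    # True
    # Googlebot has its own group, so its Disallow rule for /images/ applies.
    print(parser.can_fetch("Googlebot", "https://example.com/images/logo.png"))  # False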