switrus.com
robots.txt

Robots Exclusion Standard data for switrus.com

Resource Scan

Scan Details

Site Domain switrus.com
Base Domain switrus.com
Scan Status Ok
Last Scan 2026-03-12T11:53:18+00:00
Next Scan 2026-04-11T11:53:18+00:00

Last Scan

Scanned 2026-03-12T11:53:18+00:00
URL https://switrus.com/robots.txt
Domain IPs 104.21.65.145, 172.67.164.25, 2606:4700:3032::6815:4191, 2606:4700:3033::ac43:a419
Response IP 172.67.164.25
Found Yes
Hash 69967881f03b0bc5f52ebd8a294ddbf50880e6ebebd74b929d1814986adf70c3
SimHash 38943d31a555
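
The 64-hex-character Hash above is consistent with a SHA-256 digest of the fetched file. A minimal Python sketch of re-checking it, assuming (not stated in the scan data) that the digest is taken over the raw response body as returned by the server:

  # Re-fetch https://switrus.com/robots.txt and compare its digest with the
  # reported Hash. Assumption: the reported value is SHA-256 of the raw bytes.
  import hashlib
  import urllib.request

  REPORTED_HASH = "69967881f03b0bc5f52ebd8a294ddbf50880e6ebebd74b929d1814986adf70c3"

  with urllib.request.urlopen("https://switrus.com/robots.txt") as resp:
      body = resp.read()

  digest = hashlib.sha256(body).hexdigest()
  print("fetched digest:", digest)
  print("matches report:", digest == REPORTED_HASH)

A mismatch would simply mean the file has changed since the last scan, not that the scan data is wrong.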

Groups

oai-searchbot

Rule Path
Allow /

chatgpt-user

Rule Path
Allow /

gptbot

Rule Path
Allow /

googlebot

Rule Path
Allow /

googlebot-image

Rule Path
Allow /

*

Rule Path
Disallow /*.CVS
Disallow /*.Zip$
Disallow /*.Svn$
Disallow /*.Idea$
Disallow /*.Sql$
Disallow /*.Tgz$
Disallow /search/*
Disallow /search-page?search=
Disallow /shared-page/
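
The catch-all "*" group above relies on wildcard rules. A minimal Python sketch of how such patterns apply to request paths, assuming the crawler honors Google-style matching where "*" matches any run of characters and a trailing "$" anchors the end of the URL; the helper names here are illustrative, not part of any scan tooling:

  # Evaluate the Disallow patterns from the "*" group against sample paths.
  import re

  DISALLOW_PATTERNS = [
      "/*.CVS", "/*.Zip$", "/*.Svn$", "/*.Idea$", "/*.Sql$", "/*.Tgz$",
      "/search/*", "/search-page?search=", "/shared-page/",
  ]

  def pattern_to_regex(pattern: str) -> re.Pattern:
      """Convert a robots.txt path pattern to a regex ('*' -> any run, '$' -> end anchor)."""
      anchored = pattern.endswith("$")
      body = pattern[:-1] if anchored else pattern
      regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
      return re.compile(regex + ("$" if anchored else ""))

  def disallowed(path: str) -> bool:
      """True if any Disallow pattern matches the path from its start."""
      return any(rx.match(path) for rx in map(pattern_to_regex, DISALLOW_PATTERNS))

  print(disallowed("/search/widgets"))       # True  -> blocked by /search/*
  print(disallowed("/backup.Zip"))           # True  -> blocked by /*.Zip$
  print(disallowed("/products/red-widget"))  # False -> crawlable for "*" agents

Note that the named crawler groups (oai-searchbot, chatgpt-user, gptbot, googlebot, googlebot-image) each carry only Allow /, so these Disallow rules apply to all other user agents.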

Other Records

Field Value
sitemap https://switrus.com/sitemap.xml

Comments

  • ****************************************************************************
  • robots.txt
  • : Robots, spiders, and search engines use this file to determine which
  • content they should *not* crawl while indexing your website.
  • : This system is called "The Robots Exclusion Standard."
  • : It is strongly encouraged to use a robots.txt validator to check
  • for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area.
  • : User-agent: *
  • : Disallow: /admin/
  • Restrict Google and MSN from indexing your images.
  • : User-agent: Googlebot
  • : Disallow: /images/
  • : User-agent: MSNBot
  • : Disallow: /images/
  • ****************************************************************************
  • --- OpenAI / ChatGPT Crawlers (Allowed for AI search visibility) ---
  • CVS, SVN directory and dump files
  • Search url