support.shelly.cloud
robots.txt

Robots Exclusion Standard data for support.shelly.cloud

Resource Scan

Scan Details

Site Domain support.shelly.cloud
Base Domain shelly.cloud
Scan Status Ok
Last Scan 2025-07-31T16:49:29+00:00
Next Scan 2025-08-30T16:49:29+00:00

Last Scan

Scanned 2025-07-31T16:49:29+00:00
URL https://support.shelly.cloud/robots.txt
Domain IPs 162.159.140.147, 172.66.0.145
Response IP 162.159.140.147
Found Yes
Hash 29997605d59a3c729938beb13d19fca6eb2e75f183ab00c81c0839bb4452e215
SimHash 268145bd6568

Groups

*

Rule Path
Disallow /support/search
Disallow /support/tickets/
Disallow /support/login
Disallow /support/login-verification
Disallow /en/support/search
Disallow /en/support/tickets/
Disallow /en/support/login
Disallow /en/support/login-verification
Disallow /fr/support/search
Disallow /fr/support/tickets/
Disallow /fr/support/login
Disallow /fr/support/login-verification
Disallow /de/support/search
Disallow /de/support/tickets/
Disallow /de/support/login
Disallow /de/support/login-verification
Disallow /it/support/search
Disallow /it/support/tickets/
Disallow /it/support/login
Disallow /it/support/login-verification
Disallow /pl/support/search
Disallow /pl/support/tickets/
Disallow /pl/support/login
Disallow /pl/support/login-verification
Disallow /pt-PT/support/search
Disallow /pt-PT/support/tickets/
Disallow /pt-PT/support/login
Disallow /pt-PT/support/login-verification
Disallow /login/normal/
Allow /helpdesk/attachments
Disallow /helpdesk/
Disallow /public/tickets/
Disallow /*/hit$
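The rule group above can be checked programmatically; a minimal sketch using Python's standard-library urllib.robotparser is shown below. Note that robotparser matches rule paths by simple prefix and does not implement the "*" and "$" wildcards used in the final rule (Disallow /*/hit$), so that line is omitted here; the rule subset and test URLs are illustrative.

```python
# Minimal sketch: evaluate a subset of the scanned rule group with
# Python's stdlib urllib.robotparser (prefix matching only; no wildcards).
from urllib import robotparser

# Reconstructed from the rule group above, abbreviated to a few rows;
# the locale-prefixed rows (/en, /fr, /de, ...) follow the same pattern.
ROBOTS_TXT = """\
User-Agent: *
Disallow: /support/search
Disallow: /support/tickets/
Allow: /helpdesk/attachments
Disallow: /helpdesk/
Disallow: /public/tickets/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The Allow line precedes Disallow: /helpdesk/, and robotparser applies the
# first matching rule, so attachments stay fetchable while the rest of
# /helpdesk/ is blocked.
print(rp.can_fetch("AnyBot", "https://support.shelly.cloud/support/search"))
print(rp.can_fetch("AnyBot", "https://support.shelly.cloud/helpdesk/attachments/a"))
print(rp.can_fetch("AnyBot", "https://support.shelly.cloud/helpdesk/tickets"))
print(rp.can_fetch("AnyBot", "https://support.shelly.cloud/"))
```

Crawlers that honor wildcard syntax (e.g. Googlebot) would additionally block any URL whose path ends in /hit one segment deep, per the final rule.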

Other Records

Field Value
sitemap https://support.shelly.cloud/support/sitemap.xml

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /