erake.freshdesk.com
robots.txt

Robots Exclusion Standard data for erake.freshdesk.com

Resource Scan

Scan Details

Site Domain erake.freshdesk.com
Base Domain freshdesk.com
Scan Status Ok
Last Scan 2025-07-17T02:32:51+00:00
Next Scan 2025-08-16T02:32:51+00:00

Last Scan

Scanned 2025-07-17T02:32:51+00:00
URL https://erake.freshdesk.com/robots.txt
Domain IPs 162.159.140.147, 172.66.0.145
Response IP 172.66.0.145
Found Yes
Hash 2b641739ca161ac7c13b2a410cbc97013d7f58dc12dac5dd281c0990d18d37f6
SimHash 66453fee34a1

Groups

*

Rule Path
Disallow /support/search
Disallow /support/tickets/
Disallow /support/login
Disallow /support/login-verification
Disallow /ja-JP/support/search
Disallow /ja-JP/support/tickets/
Disallow /ja-JP/support/login
Disallow /ja-JP/support/login-verification
Disallow /zh-CN/support/search
Disallow /zh-CN/support/tickets/
Disallow /zh-CN/support/login
Disallow /zh-CN/support/login-verification
Disallow /zh-TW/support/search
Disallow /zh-TW/support/tickets/
Disallow /zh-TW/support/login
Disallow /zh-TW/support/login-verification
Disallow /en/support/search
Disallow /en/support/tickets/
Disallow /en/support/login
Disallow /en/support/login-verification
Disallow /ko/support/search
Disallow /ko/support/tickets/
Disallow /ko/support/login
Disallow /ko/support/login-verification
Disallow /login/normal/
Allow /helpdesk/attachments
Disallow /helpdesk/
Disallow /public/tickets/
Disallow /*/hit$
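The group above mixes plain path prefixes, an Allow carve-out (/helpdesk/attachments inside the disallowed /helpdesk/), and a wildcard rule (/*/hit$, where * matches any character sequence and $ anchors the match at the end of the path). Note that Python's stdlib urllib.robotparser does not interpret * and $ as wildcards, so the sketch below hand-rolls a matcher under the commonly used "longest (most specific) rule wins, ties favor Allow" precedence; the rule list is a small excerpt from the group above, not the full set.

```python
import re

# Excerpt of the rules from the "*" group above (hypothetical subset for illustration)
rules = [
    ("Disallow", "/support/search"),
    ("Allow", "/helpdesk/attachments"),
    ("Disallow", "/helpdesk/"),
    ("Disallow", "/*/hit$"),
]

def pattern_to_regex(pattern):
    """Translate a robots.txt path pattern to a regex:
    '*' matches any run of characters; a trailing '$' anchors at end of path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile("^" + body + ("$" if anchored else ""))

def is_allowed(path, rules):
    """Longest matching pattern wins; on equal length, Allow beats Disallow.
    A path matched by no rule is allowed by default."""
    best = None  # (pattern length, is_allow)
    for kind, pattern in rules:
        if pattern_to_regex(pattern).match(path):
            candidate = (len(pattern), kind == "Allow")
            if best is None or candidate > best:
                best = candidate
    return True if best is None else best[1]
```

For example, /helpdesk/attachments/img.png is allowed (the 21-character Allow outranks the 10-character Disallow /helpdesk/), while /foo/hit is blocked by the anchored wildcard and /foo/hits is not.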

Other Records

Field Value
sitemap https://erake.freshdesk.com/support/sitemap.xml

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /