roamright.com
robots.txt

Robots Exclusion Standard data for roamright.com

Resource Scan

Scan Details

Site Domain roamright.com
Base Domain roamright.com
Scan Status Ok
Last Scan 2024-10-24T20:25:41+00:00
Next Scan 2024-11-23T20:25:41+00:00

Last Scan

Scanned 2024-10-24T20:25:41+00:00
URL https://roamright.com/robots.txt
Redirect https://www.roamright.com/robots.txt
Redirect Domain www.roamright.com
Redirect Base roamright.com
Domain IPs 192.229.173.235
Redirect IPs 192.229.173.235
Response IP 192.229.173.235
Found Yes
Hash 5e659079e892005b44cd0d380a277b957162d18ba495d60683ce87f67c538443
SimHash a8055dc942ff

Groups

go-http-client/1.1

Rule Path
Disallow /

arquivo-web-crawler
arquivo-web-crawler (compatible; heritrix/1.14.3 +http://arquivo.pt/faq-crawling)

Rule Path
Disallow /

vegebot

Rule Path
Disallow /

test certificate info

Rule Path
Disallow /

panscient.com

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

easouspider
sogou web spider
sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm

Product Comment
sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm 07)
Rule Path
Disallow /

baiduspider
baiduspider-image
baiduspider-video
baiduspider-news
baiduspider-favo
baiduspider-ads
baiduspider-cpro
baiduspider+
baiduspider+(+http://www.baidu.com/search/spider.htm)
mozilla/5.0 (compatible; baiduspider/2.0; +http://www.baidu.com/search/spider.html)

Rule Path
Disallow /

icarus6j - (contact: phil@icarus6.com)

Rule Path
Disallow /

icarus6j

Rule Path
Disallow /

voltron

Rule Path
Disallow /

wininet test

Rule Path
Disallow /

googlebot

Rule Path
Allow /css/*.css
Allow /js/*.js
Allow /ScriptResource.axd
Allow /WebResource.axd
Disallow /uploads/
Disallow /workarea/
Disallow /widgets/
Disallow /?sTerm=
Disallow /_utils/
Disallow /components/
Disallow /cob/
Disallow /policycob/
Disallow /errors/
Disallow /logs/
Disallow /Old_App_Code/
Disallow /Old_App_WebReferences/
Disallow /ResourceProviders/
Disallow /XmlFiles/
Disallow /login.aspx
Disallow /signin/
Disallow /register.aspx
Disallow /register/
Disallow /resetpassword.aspx
Disallow /resetpassword/
Disallow /tag/

*

Rule Path
Disallow /css/
Disallow /uploads/
Disallow /workarea/
Disallow /widgets/
Disallow /?sTerm=
Disallow /_utils/
Disallow /components/
Disallow /cob/
Disallow /policycob/
Disallow /errors/
Disallow /logs/
Disallow /Old_App_Code/
Disallow /Old_App_WebReferences/
Disallow /ResourceProviders/
Disallow /XmlFiles/
Disallow /login.aspx
Disallow /signin/
Disallow /register.aspx
Disallow /register/
Disallow /resetpassword.aspx
Disallow /resetpassword/
Disallow /tag/
Disallow /ScriptResource.axd
Disallow /WebResource.axd
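The groups above can be exercised with Python's standard-library robots.txt parser. The sketch below is hypothetical and simplified: it feeds a trimmed subset of the scanned rules (the mj12bot full block plus part of the catch-all group) to `urllib.robotparser`. Note that the stdlib parser does plain prefix matching and does not expand in-path `*` wildcards such as `/css/*.css`.

```python
from urllib.robotparser import RobotFileParser

# Trimmed subset of the rules reported by the scan (illustrative sketch).
ROBOTS_TXT = """\
User-agent: mj12bot
Disallow: /

User-agent: *
Disallow: /css/
Disallow: /uploads/
Disallow: /?sTerm=
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# mj12bot is blocked from the entire site.
print(rp.can_fetch("MJ12bot", "https://www.roamright.com/"))                  # False
# Other crawlers fall through to the * group.
print(rp.can_fetch("ExampleBot", "https://www.roamright.com/uploads/x.pdf"))  # False
print(rp.can_fetch("ExampleBot", "https://www.roamright.com/blog/"))          # True
```

`ExampleBot` is a made-up agent name; any agent without its own group is matched against the `*` group.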

Other Records

Field Value
sitemap https://www.roamright.com/sitemap.xml
sitemap https://www.roamright.com/sitemap.xml

Comments

  • User-agent: dotbot
  • User-agent: rogerbot
  • User-agent: rogerbot/1.2
  • User-agent: rogerbot/1.2 (https://moz.com/help/guides/moz-procedures/what-is-rogerbot, rogerbot-crawler+aardwolf-production-crawler-34@moz.com)
  • Disallow: /
  • Baiduspider
  • Google will skip the wildcard (`*`) entries entirely and obey only the rules listed under the Googlebot user-agent. It is therefore best not to declare a dedicated Googlebot user-agent in robots.txt unless you absolutely have to; if you do, list every page and asset rule Googlebot needs, even where that duplicates the wildcard entries.
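The group-selection behaviour described in that last comment can be reproduced with Python's `urllib.robotparser`: a crawler obeys only its own matching group, so a path blocked under `*` but allowed under Googlebot (here `/ScriptResource.axd`, as in the scanned rules) remains fetchable by Googlebot. A minimal sketch against a trimmed subset of the rules, keeping in mind that this parser does not expand in-path `*` wildcards:

```python
from urllib.robotparser import RobotFileParser

# Trimmed subset of the scanned robots.txt: the googlebot group allows
# /ScriptResource.axd, while the catch-all group disallows it.
ROBOTS_TXT = """\
User-agent: googlebot
Allow: /ScriptResource.axd
Disallow: /uploads/

User-agent: *
Disallow: /ScriptResource.axd
Disallow: /uploads/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

url = "https://www.roamright.com/ScriptResource.axd"
print(rp.can_fetch("Googlebot", url))   # True  - the googlebot group wins
print(rp.can_fetch("ExampleBot", url))  # False - falls back to the * group
```

This is why the comment recommends duplicating every needed Allow/Disallow rule inside the Googlebot group: once that group exists, the wildcard rules no longer apply to Google at all.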