
Robots Exclusion Standard data for rattlermidstream.com

Resource Scan

Scan Details

Site Domain rattlermidstream.com
Base Domain rattlermidstream.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't connect to server.
Last Scan 2024-03-22T06:45:56+00:00
Next Scan 2024-06-20T06:45:56+00:00

Last Successful Scan

Scanned 2022-07-29T06:22:28+00:00
URL https://www.rattlermidstream.com/robots.txt
Response IP 23.59.168.105
Found Yes
Hash 5bdfef0a6b41788779e89b2030377949916c95691e5c3c7262458553414eb297
SimHash 3896bd0bc760

Groups

*

Rule Path
Allow /core/*.css$
Allow /core/*.css?
Allow /core/*.js$
Allow /core/*.js?
Allow /core/*.gif
Allow /core/*.jpg
Allow /core/*.jpeg
Allow /core/*.png
Allow /core/*.svg
Allow /profiles/*.css$
Allow /profiles/*.css?
Allow /profiles/*.js$
Allow /profiles/*.js?
Allow /profiles/*.gif
Allow /profiles/*.jpg
Allow /profiles/*.jpeg
Allow /profiles/*.png
Allow /profiles/*.svg
Disallow /core/
Disallow /profiles/
Disallow /README.txt
Disallow /web.config
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips/
Disallow /search/
Disallow /user/register
Disallow /user/password
Disallow /user/login
Disallow /user/logout
Disallow /index.php/admin/
Disallow /index.php/comment/reply/
Disallow /index.php/filter/tips/
Disallow /index.php/search/
Disallow /index.php/user/password
Disallow /index.php/user/register
Disallow /index.php/user/login
Disallow /index.php/user/logout
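The rules above use Google-style wildcard syntax: "*" matches any character sequence and a trailing "$" anchors the match at the end of the path. As a sketch (not the scanner's own code), here is how such rules can be evaluated in Python, assuming the commonly documented semantics that the longest matching rule wins and Allow breaks ties. The RULES list is a small illustrative subset of the group above:

```python
import re

def rule_to_regex(path):
    # Translate a robots.txt path pattern to a regex:
    # '*' matches any sequence; a trailing '$' anchors the end.
    anchored = path.endswith("$")
    if anchored:
        path = path[:-1]
    pattern = ".*".join(re.escape(part) for part in path.split("*"))
    return re.compile("^" + pattern + ("$" if anchored else ""))

# Illustrative subset of the group above.
RULES = [
    ("Allow", "/core/*.css$"),
    ("Allow", "/core/*.js$"),
    ("Disallow", "/core/"),
    ("Disallow", "/admin/"),
]

def is_allowed(url_path):
    # Longest matching rule wins; on a tie, Allow wins.
    best = None
    for verb, path in RULES:
        if rule_to_regex(path).match(url_path):
            if best is None or len(path) > len(best[1]) or (
                len(path) == len(best[1]) and verb == "Allow"
            ):
                best = (verb, path)
    return best is None or best[0] == "Allow"
```

Under these rules, "/core/misc/drupal.js" is allowed (the Allow rule is longer than "Disallow /core/"), while "/core/README.txt" is blocked. Note that Python's stdlib urllib.robotparser does prefix matching only and does not implement "*" or "$", which is why a custom matcher is sketched here.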

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • CSS, JS, Images
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)