cubo.network robots.txt

Robots Exclusion Standard data for cubo.network

Resource Scan

Scan Details

Site Domain cubo.network
Base Domain cubo.network
Scan Status Ok
Last Scan 2024-08-29T13:37:33+00:00
Next Scan 2024-09-28T13:37:33+00:00

Last Scan

Scanned 2024-08-29T13:37:33+00:00
URL https://cubo.network/robots.txt
Domain IPs 54.192.18.115, 54.192.18.33, 54.192.18.92, 54.192.18.93
Response IP 108.156.133.85
Found Yes
Hash 2d150efcd217ee8a034d759d8b4fc6b5896ffc4bbf7169fadd155cbf6c401a9e
SimHash bc109d5ac764
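
The Hash above is 64 hexadecimal digits, the length of a SHA-256 digest; the SimHash is a short locality-sensitive fingerprint used to spot near-duplicate revisions between scans. A minimal sketch that reproduces the content hash, assuming (not confirmed by this page) that it is SHA-256 of the raw response body:

    import hashlib
    import urllib.request

    # Assumption: the scan's Hash field is SHA-256 of the raw robots.txt
    # body; its 64-hex-digit length is consistent with that, but the
    # scanner's exact hashing scheme is not documented here.
    with urllib.request.urlopen("https://cubo.network/robots.txt") as resp:
        body = resp.read()
    print(hashlib.sha256(body).hexdigest())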

Groups

*

Rule Path
Allow /assets/**/*.css$
Allow /assets/**/*.css?
Allow /assets/**/*.js$
Allow /assets/**/*.js?
Allow /assets/**/*.gif
Allow /assets/**/*.jpg
Allow /assets/**/*.jpeg
Allow /assets/**/*.png
Allow /assets/**/*.svg
Disallow /**/*.md
Disallow /go/
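
These rule paths use Google-style wildcards: '*' matches any run of characters, a trailing '$' anchors the match at the end of the URL path, and '**' behaves the same as '*'. A minimal Python sketch of that matching, written against Google's documented wildcard semantics (the rule_matches helper is illustrative, not part of any library):

    import re

    def rule_matches(pattern: str, path: str) -> bool:
        """Match a robots.txt rule path against a URL path using
        Google-style wildcards: '*' matches any character sequence,
        a trailing '$' anchors the end, and '**' collapses to '*'."""
        anchored = pattern.endswith("$")
        if anchored:
            pattern = pattern[:-1]
        # Escape everything except '*', which becomes '.*'.
        regex = ".*".join(re.escape(part) for part in pattern.split("*"))
        if anchored:
            regex += "$"
        return re.match(regex, path) is not None  # match is start-anchored

    # Checked against the rules above (expected results in comments):
    print(rule_matches("/assets/**/*.css$", "/assets/theme/app.css"))     # True
    print(rule_matches("/assets/**/*.css$", "/assets/theme/app.css.gz"))  # False
    print(rule_matches("/**/*.md", "/docs/README.md"))                    # True
    print(rule_matches("/go/", "/go/some-redirect"))                      # True

Under Google's precedence rules, when both an Allow and a Disallow rule match a path, the longest (most specific) matching rule wins.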

Other Records

Field Value
crawl-delay 10
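
Crawl-delay asks crawlers to pause between requests, here for 10 seconds. It is a non-standard extension: Google ignores it, but some other crawlers honor it, and Python's standard urllib.robotparser exposes it via crawl_delay(). A minimal polite-fetch sketch:

    import time
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser("https://cubo.network/robots.txt")
    rp.read()

    delay = rp.crawl_delay("*") or 0  # 10 seconds for this file, per above
    for url in ("https://cubo.network/", "https://cubo.network/go/x"):
        # Note: the stdlib parser does simple prefix matching and does
        # not implement the wildcard patterns shown in the rules above.
        if rp.can_fetch("*", url):
            pass  # fetch the page here (urllib, requests, etc.)
        time.sleep(delay)  # wait between requests as the site asks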

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts of your site by web crawlers and spiders run by sites like Yahoo! and Google. By telling these "robots" where not to go on your site, you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see: http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see: http://www.sxw.org.uk/computing/robots/check.html
  • CSS, JS, Images
  • Files
  • Paths (clean URLs)