alexei-led.github.io
robots.txt

Robots Exclusion Standard data for alexei-led.github.io

Resource Scan

Scan Details

Site Domain alexei-led.github.io
Base Domain alexei-led.github.io
Scan Status Ok
Last Scan 2024-08-29T16:25:57+00:00
Next Scan 2024-09-28T16:25:57+00:00

Last Scan

Scanned 2024-08-29T16:25:57+00:00
URL https://alexei-led.github.io/robots.txt
Domain IPs 185.199.108.153, 185.199.109.153, 185.199.110.153, 185.199.111.153, 2606:50c0:8000::153, 2606:50c0:8001::153, 2606:50c0:8002::153, 2606:50c0:8003::153
Response IP 185.199.110.153
Found Yes
Hash 09e32682411c79bf2840e91f7330f6faac558b9187c1a684a4d88594a839b0a2
SimHash b8129d0bc764

Groups

User-agent: *

Rule       Path
Disallow   (empty)

Other Records

Field         Value
crawl-delay   10
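
Taken together, the group and record above correspond to a robots.txt of roughly the following shape. This is a reconstruction from the scan data, not the verbatim file; directive order and capitalization are assumed, and the original also carries the comment block listed below.

  User-agent: *
  Crawl-delay: 10
  Disallow:

An empty Disallow path disallows nothing, so this file permits crawling of the whole site while asking well-behaved crawlers to wait 10 seconds between requests.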

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
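
A minimal sketch of how a crawler might consume this file, using Python's standard urllib.robotparser; the URL matches the scan above, and the expected outputs are assumptions based on the rules listed in Groups and Other Records.

  # Fetch and parse the scanned robots.txt (it must live at the host root).
  from urllib import robotparser

  rp = robotparser.RobotFileParser()
  rp.set_url("https://alexei-led.github.io/robots.txt")
  rp.read()

  # With an empty Disallow for "*", any path should be fetchable.
  print(rp.can_fetch("*", "https://alexei-led.github.io/"))  # expected: True

  # The crawl-delay record is exposed per user agent; expected value here: 10.
  print(rp.crawl_delay("*"))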