hvdd.org
robots.txt

Robots Exclusion Standard data for hvdd.org

Resource Scan

Scan Details

Site Domain hvdd.org
Base Domain hvdd.org
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a server error.
Last Scan 2024-10-03T21:00:24+00:00
Next Scan 2024-12-02T21:00:24+00:00

Last Successful Scan

Scanned 2024-07-13T20:58:49+00:00
URL https://hvdd.org/robots.txt
Domain IPs 104.21.6.160, 172.67.135.3, 2606:4700:3032::ac43:8703, 2606:4700:3033::6815:6a0
Response IP 104.21.6.160
Found Yes
Hash 0f23fe72e119f7cbf5acee694bba85d79450a74247ae87e29336bef7e9649dcb
SimHash 2c71fd1344c7

Groups

*

Rule Path
Allow /
Disallow /search

googlebot

Rule Path
Disallow

googlebot-image

Rule Path
Disallow

googlebot-mobile

Rule Path
Disallow

msnbot

Rule Path
Disallow

slurp

Rule Path
Disallow

teoma

Rule Path
Disallow

gigabot

Rule Path
Disallow

robozilla

Rule Path
Disallow

nutch

Rule Path
Disallow

ia_archiver

Rule Path
Disallow

baiduspider

Rule Path
Disallow

naverbot

Rule Path
Disallow

yeti

Rule Path
Disallow

yahoo-mmcrawler

Rule Path
Disallow

psbot

Rule Path
Disallow

yahoo-blogs/v3.9

Rule Path
Disallow

*

Rule Path
Disallow
Disallow /cgi-bin/
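The groups above can be condensed into a working robots.txt and checked with Python's standard-library parser. A minimal sketch (a representative subset of the scanned rules, not the full file): note that an empty `Disallow` line, as in the per-bot groups above, disallows nothing, i.e. it permits everything for that bot. The specific `Disallow` rules are placed before the blanket `Allow: /` here because `urllib.robotparser` applies the first matching rule.

```python
from urllib import robotparser

# Condensed reconstruction of the scanned rules (illustrative subset).
ROBOTS_TXT = """\
User-agent: *
Disallow: /search
Disallow: /cgi-bin/
Allow: /

User-agent: googlebot
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A crawler with no group of its own falls under the * group.
print(rp.can_fetch("SomeBot", "https://hvdd.org/search"))    # False
print(rp.can_fetch("SomeBot", "https://hvdd.org/about"))     # True
# googlebot's empty Disallow means nothing is disallowed for it.
print(rp.can_fetch("googlebot", "https://hvdd.org/search"))  # True
```

`SomeBot` is a hypothetical user-agent name used only to exercise the `*` group.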

Other Records

Field Value
sitemap https://hvdd.org/

Comments

  • This robots.txt file controls crawling of URLs under https://example.com.
  • All crawlers are disallowed to crawl files in the "includes" directory, such as .css, .js, but Google needs them for rendering, so Googlebot is allowed to crawl them.
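These comments appear to be copied from a documentation example: they refer to https://example.com and an "includes" directory, neither of which appears in the scanned groups. The pattern they describe would look like the following (paths are illustrative, not taken from the scanned file):

```
User-agent: *
Disallow: /includes/

User-agent: Googlebot
Allow: /includes/
```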