infragard.org
robots.txt

Robots Exclusion Standard data for infragard.org

Resource Scan

Scan Details

Site Domain infragard.org
Base Domain infragard.org
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-08-28T14:55:32+00:00
Next Scan 2025-08-29T14:55:32+00:00

Last Successful Scan

Scanned 2025-07-29T14:55:07+00:00
URL https://infragard.org/robots.txt
Domain IPs 104.18.20.71, 104.18.21.71, 2606:4700::6812:1447, 2606:4700::6812:1547
Response IP 104.18.20.71
Found Yes
Hash 1544eeb10343a6dd53506fe6ec7b26521af675331eaffd891230ca2606415714
SimHash ad51ab554d65

Groups

*

Rule Path
Disallow (empty path: allows all)

googlebot

Rule Path
Disallow /*?
Disallow /*atct_album_view$
Disallow /*folder_factories$
Disallow /*folder_summary_view$
Disallow /*login_form$
Disallow /*mail_password_form$
Disallow /%40%40search
Disallow /*search_rss$
Disallow /*sendto_form$
Disallow /*summary_view$
Disallow /*thumbnail_view$
Disallow /*view$
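
The googlebot group above relies on Google's wildcard extension ('*' matches any run of characters, a trailing '$' anchors to the end of the path), which Python's stdlib urllib.robotparser does not interpret. A minimal sketch of checking a path against these rules by translating each pattern into a regular expression (function names here are illustrative, not part of any library):

```python
import re

# Disallow patterns from the googlebot group above.
GOOGLEBOT_RULES = [
    "/*?",
    "/*atct_album_view$",
    "/*folder_factories$",
    "/*folder_summary_view$",
    "/*login_form$",
    "/*mail_password_form$",
    "/%40%40search",
    "/*search_rss$",
    "/*sendto_form$",
    "/*summary_view$",
    "/*thumbnail_view$",
    "/*view$",
]

def rule_to_regex(pattern: str) -> re.Pattern:
    """Translate one Googlebot-style Disallow pattern into a regex.

    '*' matches any run of characters; a trailing '$' anchors the
    pattern to the end of the path. Everything else is literal.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile("^" + body + ("$" if anchored else ""))

COMPILED = [rule_to_regex(r) for r in GOOGLEBOT_RULES]

def disallowed_for_googlebot(path: str) -> bool:
    """True if any Disallow rule matches the given URL path."""
    return any(rx.search(path) for rx in COMPILED)
```

For example, "/members/login_form" is caught by "/*login_form$", and any path containing a query string is caught by "/*?", while a plain page like "/about" matches nothing.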

Other Records

Field Value
sitemap https://www.infragard.org/sitemap.xml.gz

Comments

  • Define access-restrictions for robots/spiders
  • http://www.robotstxt.org/wc/norobots.html
  • By default we allow robots to access all areas of our site
  • already accessible to anonymous users
  • Add Googlebot-specific syntax extension to exclude forms
  • that are repeated for each piece of content in the site
  • the wildcard is only supported by Googlebot
  • http://www.google.com/support/webmasters/bin/answer.py?answer=40367&ctx=sibling
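
Taken together, the groups, sitemap record, and comments above correspond to a robots.txt roughly like the following. This is a plausible reconstruction; the exact ordering and comment placement in the original file are not recoverable from the scan data:

```
# Define access-restrictions for robots/spiders
# http://www.robotstxt.org/wc/norobots.html

# By default we allow robots to access all areas of our site
# already accessible to anonymous users
User-agent: *
Disallow:

# Add Googlebot-specific syntax extension to exclude forms
# that are repeated for each piece of content in the site
# the wildcard is only supported by Googlebot
# http://www.google.com/support/webmasters/bin/answer.py?answer=40367&ctx=sibling
User-agent: googlebot
Disallow: /*?
Disallow: /*atct_album_view$
Disallow: /*folder_factories$
Disallow: /*folder_summary_view$
Disallow: /*login_form$
Disallow: /*mail_password_form$
Disallow: /%40%40search
Disallow: /*search_rss$
Disallow: /*sendto_form$
Disallow: /*summary_view$
Disallow: /*thumbnail_view$
Disallow: /*view$

Sitemap: https://www.infragard.org/sitemap.xml.gz
```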