paulist.org
robots.txt

Robots Exclusion Standard data for paulist.org

Resource Scan

Scan Details

Site Domain paulist.org
Base Domain paulist.org
Scan Status Ok
Last Scan 2025-08-28T02:12:38+00:00
Next Scan 2025-09-27T02:12:38+00:00

Last Scan

Scanned 2025-08-28T02:12:38+00:00
URL https://paulist.org/robots.txt
Domain IPs 104.21.28.229, 172.67.147.195, 2606:4700:3030::ac43:93c3, 2606:4700:3032::6815:1ce5
Response IP 104.21.28.229
Found Yes
Hash 8825a5f2ef8b91f14c56e9ae7cf02639f3a1462b6e3b111df71771f0eb7cb87b
SimHash 31b41d216555

Groups

ahrefsbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 5

Comments

  • robots.txt
  • Robots, spiders, and search engines use this file to determine which content they should *not* crawl while indexing your website.
  • This system is called "The Robots Exclusion Standard."
  • It is strongly encouraged to use a robots.txt validator to check for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area:
  •   User-agent: *
  •   Disallow: /admin/
  • Restrict Google and MSN from indexing your images:
  •   User-agent: Googlebot
  •   Disallow: /images/
  •   User-agent: MSNBot
  •   Disallow: /images/
  • Ticket: 21639414
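The group and crawl-delay records above can be checked programmatically. A minimal sketch using Python's standard urllib.robotparser, assuming a hypothetical robots.txt that mirrors this scan (an "ahrefsbot" group with no Disallow rules and a crawl-delay of 5):

```python
from urllib import robotparser

# Hypothetical file contents reconstructed from the scan above:
# a group for "ahrefsbot" with no rules and a crawl-delay record.
ROBOTS_TXT = """\
User-agent: ahrefsbot
Crawl-delay: 5
"""

rp = robotparser.RobotFileParser()
rp.modified()  # mark the file as fetched so can_fetch()/crawl_delay() answer
rp.parse(ROBOTS_TXT.splitlines())

# With no Disallow rules in the matching group, every path is allowed.
print(rp.can_fetch("ahrefsbot", "https://paulist.org/any/path"))  # True

# The non-standard Crawl-delay field is exposed per user agent.
print(rp.crawl_delay("ahrefsbot"))  # 5
```

Note that `crawl_delay()` returns `None` for agents with no matching record; polite crawlers typically sleep that many seconds between requests.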