wiki.gnome.org
robots.txt

Robots Exclusion Standard data for wiki.gnome.org

Resource Scan

Scan Details

Site Domain wiki.gnome.org
Base Domain gnome.org
Scan Status Ok
Last Scan 2025-07-29T15:30:37+00:00
Next Scan 2025-08-28T15:30:37+00:00

Last Scan

Scanned 2025-07-29T15:30:37+00:00
URL https://wiki.gnome.org/robots.txt
Domain IPs 207.211.208.183, 2a02:6ea0:d100::32, 2a02:6ea0:d100::33, 2a02:6ea0:d100::34, 2a02:6ea0:d100::35, 2a02:6ea0:d100::36, 2a02:6ea0:d100::49, 2a02:6ea0:d100::50, 2a02:6ea0:d100::52, 79.127.170.194, 79.127.213.245, 79.127.235.3, 79.127.235.59, 89.187.162.13, 89.187.162.155, 89.187.163.18
Response IP 79.127.235.52
Found Yes
Hash 2fdc893936595906aa6090f59c015dd563cdf88c09296af7502b91dd46d50de4
SimHash 2c7a5f024f35

Groups

dotbot

Rule      Path
Disallow  /

*

Rule      Path
Disallow  /action/

Other Records

Field        Value
crawl-delay  20
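
Taken together with the comments listed under Comments below, the rule groups and the crawl-delay record correspond to a robots.txt along the following lines. This is a reconstruction from the scan data only; the exact ordering and the placement of the comments and the Crawl-delay record in the * group are assumptions (that placement is typical of MoinMoin's stock robots.txt):

    # if you want to add own robot rules, do it BEFORE the final rule matching *
    User-agent: dotbot
    Disallow: /

    User-agent: *
    # This has to match script url + cfg.url_prefix_action - it
    # saves lots of search engine load and traffic by disallowing crawlers
    # to request action related URLs.
    # NOTE - in order to make this have any effect, you have to set
    # url_prefix_action to "action", cf. HelpOnConfiguration
    Crawl-delay: 20
    Disallow: /action/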

Comments

  • if you want to add own robot rules, do it BEFORE the final rule matching *
  • This has to match script url + cfg.url_prefix_action - it saves lots of search engine load and traffic by disallowing crawlers to request action related URLs.
  • NOTE - in order to make this have any effect, you have to set url_prefix_action to "action", cf. HelpOnConfiguration
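
To check how these rules are interpreted in practice, here is a minimal sketch using Python's standard-library urllib.robotparser. The expected results simply restate the groups recorded in this scan and assume the file is unchanged since the last scan:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the robots.txt recorded by this scan.
    rp = RobotFileParser()
    rp.set_url("https://wiki.gnome.org/robots.txt")
    rp.read()

    # dotbot is disallowed from the whole site (Disallow: /).
    print(rp.can_fetch("dotbot", "https://wiki.gnome.org/"))        # expected: False

    # All other agents are blocked only from action URLs (Disallow: /action/).
    print(rp.can_fetch("*", "https://wiki.gnome.org/action/edit"))  # expected: False
    print(rp.can_fetch("*", "https://wiki.gnome.org/"))             # expected: True

    # Group-wide crawl delay in seconds (Crawl-delay: 20).
    print(rp.crawl_delay("*"))                                      # expected: 20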