swmc.org.uk
robots.txt

Robots Exclusion Standard data for swmc.org.uk

Resource Scan

Scan Details

Site Domain swmc.org.uk
Base Domain swmc.org.uk
Scan Status Ok
Last Scan 2026-01-28T07:44:38+00:00
Next Scan 2026-02-27T07:44:38+00:00

Last Scan

Scanned 2026-01-28T07:44:38+00:00
URL https://swmc.org.uk/robots.txt
Domain IPs 104.21.65.182, 172.67.165.76, 2606:4700:3035::ac43:a54c, 2606:4700:3036::6815:41b6
Response IP 172.67.165.76
Found Yes
Hash c733eb974d98ee12155ad81bc640d82226f551219a8ab327b2763fb5b5832a81
SimHash be165199cef4
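
The Hash field above is a 64-character hex digest, which is consistent with SHA-256 of the raw robots.txt body (the scanner's exact algorithm is an assumption). A minimal sketch of how such a content hash can be computed with Python's standard library; the sample bytes here are a stand-in, since reproducing the recorded digest would require fetching the live file, which may have changed since the scan:

```python
import hashlib

# Stand-in for the raw robots.txt body; hashing the real file would
# require fetching https://swmc.org.uk/robots.txt at scan time.
sample = b"User-agent: *\nDisallow: /admin/\n"

# SHA-256 produces the same 64-hex-character shape as the Hash field above.
digest = hashlib.sha256(sample).hexdigest()
print(digest)
```

Comparing digests across scans is a cheap way to detect whether the file changed between the Last Scan and Next Scan dates.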

Groups

*

Rule Path
Disallow /g_book/index.php?

ahrefsbot

Rule Path
Disallow /

accelobot

Rule Path
Disallow /

sosospider

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

mediapartners-google*

Rule Path
Disallow /

israbot

Rule Path
Disallow

orthogaffe

Rule Path
Disallow

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

xovibot

Rule Path
Disallow /

fast

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

*

Rule Path
Disallow /includes/
Disallow /misc/
Disallow /modules/
Disallow /profiles/
Disallow /scripts/
Disallow /themes/
Disallow /%3Fq%3Den/google*
Disallow /CHANGELOG.txt
Disallow */CHANGELOG.txt/*
Disallow /cron.php
Disallow /INSTALL.mysql.txt
Disallow /INSTALL.pgsql.txt
Disallow /install.php
Disallow /INSTALL.txt
Disallow /LICENSE.txt
Disallow /MAINTAINERS.txt
Disallow /update.php
Disallow /UPGRADE.txt
Disallow /xmlrpc.php
Disallow /admin/
Disallow /comment/reply/
Disallow /logout/
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /INSTALL.mysql.txt/
Disallow /INSTALL.pgsql.txt/
Disallow /?q=admin%2F
Disallow /?q=comment%2Freply%2F
Disallow /?q=logout%2F
Disallow /?q=node%2Fadd%2F
Disallow /?q=search%2F
Disallow /?q=user%2Fpassword%2F
Disallow /?q=user%2Fregister%2F
Disallow /?q=user%2Flogin%2F
Disallow /%3Fq%3Den/INSTALL.pgsql.txt/
Disallow /%3Fq%3Den/COPYRIGHT.txt/
Disallow /%3Fq%3Den/INSTALL.txt/
Disallow /%3Fq%3Den/LICENSE.txt/
Disallow /%3Fq%3Den/MAINTAINERS.txt/
Disallow /%3Fq%3Den/docs/
Disallow /%3Fq%3Den/UPGRADE.txt/
Disallow /%3Fq%3Den/*
Disallow /g_book/index.php?
Disallow /g_book/index.php/Help
Disallow /g_book/index.php/MediaWiki

Other Records

Field Value
crawl-delay 15
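
The groups and records above can be exercised with Python's standard-library `urllib.robotparser`. A minimal sketch: the rule lines below are a small subset copied from the scan results (the full file has many more groups), and the query URLs are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Subset of the rules recorded in this scan: a catch-all group with a
# crawl-delay of 15, plus one of the fully blocked crawlers (ahrefsbot).
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /g_book/index.php?
Crawl-delay: 15

User-agent: ahrefsbot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# ahrefsbot is disallowed from the entire site.
print(rp.can_fetch("ahrefsbot", "https://swmc.org.uk/"))      # False
# /admin/ is disallowed for all agents via the * group.
print(rp.can_fetch("somebot", "https://swmc.org.uk/admin/"))  # False
# A path with no matching Disallow rule is allowed.
print(rp.can_fetch("somebot", "https://swmc.org.uk/news/"))   # True
# Crawl-delay from the * group (non-standard, but honored by some bots).
print(rp.crawl_delay("somebot"))                              # 15
```

Note that `Crawl-delay` is not part of the original Robots Exclusion Standard; support varies by crawler, which is why the scanner reports it separately under Other Records.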

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • We have beautified URLs - will this help? Let's test,
  • advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • Sorry, wget in its recursive mode is a frequent problem.
  • Please read the man page and use it properly; there is a
  • --wait option you can use to set the delay between hits,
  • for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)
  • Try to restrict bots to Mediawiki article pages only
  • Disallow /wiki/Help
  • Disallow: /wiki/MediaWiki
  • Disallow: /wiki/Special:
  • Disallow: /wiki/Template:
  • Disallow: /wiki/skins/

Warnings

  • 2 invalid lines.