wikidebates.org
robots.txt

Robots Exclusion Standard data for wikidebates.org

Resource Scan

Scan Details

Site Domain wikidebates.org
Base Domain wikidebates.org
Scan Status Ok
Last Scan 2025-11-30T23:43:08+00:00
Next Scan 2025-12-30T23:43:08+00:00

Last Scan

Scanned 2025-11-30T23:43:08+00:00
URL https://wikidebates.org/robots.txt
Redirect https://fr.wikidebates.org/robots.txt
Redirect Domain fr.wikidebates.org
Redirect Base wikidebates.org
Domain IPs 104.21.46.138, 172.67.139.47, 2606:4700:3030::6815:2e8a, 2606:4700:3030::ac43:8b2f
Redirect IPs 104.21.46.138, 172.67.139.47, 2606:4700:3030::6815:2e8a, 2606:4700:3030::ac43:8b2f
Response IP 104.21.46.138
Found Yes
Hash 73a11e4eb33e3e36a878f65b67876b174908993c84f887cd1d3bdb5cb60e48b9
SimHash 7d1c5d49cdf7
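
The Hash field is a 64-hex-digit (256-bit) digest of the fetched robots.txt body, consistent with SHA-256; SimHash is a locality-sensitive fingerprint used to detect near-duplicate revisions between scans. A minimal sketch of reproducing the Hash value, assuming the scanner hashes the raw response bytes of the URL recorded above:

    import hashlib
    import urllib.request

    # urllib follows the redirect to fr.wikidebates.org/robots.txt by default,
    # matching the Redirect fields in this scan record.
    with urllib.request.urlopen("https://wikidebates.org/robots.txt") as resp:
        body = resp.read()

    # If the file is unchanged since the scan, this prints
    # 73a11e4eb33e3e36a878f65b67876b174908993c84f887cd1d3bdb5cb60e48b9
    print(hashlib.sha256(body).hexdigest())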

Groups

mediapartners-google*

Rule Path
Disallow /

israbot

Rule Path
Disallow

orthogaffe

Rule Path
Disallow

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

fast

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

*

Rule Path
Allow /w/api.php?action=mobileview&
Allow /w/load.php?
Allow /api/rest_v1/?doc
Disallow /w/index.php?
Disallow /w/skins/
Disallow /api/
Disallow /trap/
Disallow /wiki/Special%3ARandomPage
Disallow /wiki/Special%3ASearch
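
Taken together, the groups reconstruct a typical MediaWiki robots.txt: advertising and site-copying bots are denied everything, israbot and orthogaffe are explicitly permitted (an empty Disallow allows all paths), and the catch-all * group whitelists a few API endpoints while blocking dynamically generated pages. A minimal sketch of testing these rules with Python's standard urllib.robotparser (the sample URLs are illustrative):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://fr.wikidebates.org/robots.txt")
    rp.read()  # fetch and parse the live file

    # Catch-all group: article pages allowed, index.php and /api/ blocked,
    # with an explicit Allow for the REST API documentation page.
    print(rp.can_fetch("*", "https://fr.wikidebates.org/wiki/Accueil"))         # expected: True
    print(rp.can_fetch("*", "https://fr.wikidebates.org/w/index.php?title=X"))  # expected: False
    print(rp.can_fetch("*", "https://fr.wikidebates.org/api/rest_v1/?doc"))     # expected: True

    # A site copier matched by one of the per-bot groups above.
    print(rp.can_fetch("HTTrack", "https://fr.wikidebates.org/wiki/Accueil"))   # expected: False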

Other Records

Field Value
sitemap https://de.wikidebates.org/w/sitemaps/de/sitemap-index-dewikidebates.xml
sitemap https://en.wikidebates.org/w/sitemaps/en/sitemap-index-enwikidebates.xml
sitemap https://es.wikidebates.org/w/sitemaps/es/sitemap-index-eswikidebates.xml
sitemap https://fr.wikidebates.org/w/sitemaps/fr/sitemap-index-frwikidebates.xml
sitemap https://it.wikidebates.org/w/sitemaps/it/sitemap-index-itwikidebates.xml
sitemap https://pt.wikidebates.org/w/sitemaps/pt/sitemap-index-ptwikidebates.xml
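
Each Sitemap record points to a per-language sitemap index, which in turn lists the actual sitemap files. A minimal sketch of enumerating the sub-sitemaps of one of the indexes above, assuming the standard sitemaps.org XML namespace:

    import urllib.request
    import xml.etree.ElementTree as ET

    INDEX = "https://fr.wikidebates.org/w/sitemaps/fr/sitemap-index-frwikidebates.xml"
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    with urllib.request.urlopen(INDEX) as resp:
        tree = ET.parse(resp)

    # A sitemap index wraps <sitemap><loc>...</loc></sitemap> entries.
    for loc in tree.findall(".//sm:sitemap/sm:loc", NS):
        print(loc.text)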

Comments

  • robots.txt from https://fr.wikipedia.org/robots.txt
  • Please note: There are a lot of pages on this site, and there are some misbehaved spiders out there that go _way_ too fast. If you're irresponsible, your access to the site may be blocked.
  • advertising-related bots:
  • Wikipedia work bots:
  • Crawlers that are kind enough to obey, but which we'd rather not have unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Misbehaving: requests much too fast:
  • Sorry, wget in its recursive mode is a frequent problem. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance.
  • The 'grub' distributed client has been *very* poorly behaved. Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Wayback Machine: defaults and whether to index user-pages
  • FIXME: Complete the removal of this block, per T7582.
  • User-agent: archive.org_bot
  • Allow: /
  • Friendly, low-speed bots are welcome viewing article pages, but not dynamically-generated pages please.
  • Inktomi's "Slurp" can read a minimum delay between hits; if your bot supports such a thing using the 'Crawl-delay' or another instruction, please let us know.
  • There is a special exception for API mobileview to allow dynamic mobile web & app views to load section content. These views aren't HTTP-cached but use parser cache aggressively and don't expose special: pages etc.
  • Another exception is for REST API documentation, located at /api/rest_v1/?doc.
  • robots.txt section for http://fr.wikipedia.org/ only
  • A general section common to all sites is added above this one in http://fr.wikipedia.org/robots.txt
  • Please check every modification with a syntax checker such as http://tool.motoricerca.info/robots-checker.phtml
  • Enter http://fr.wikipedia.org/robots.txt as the URL to check.
  • ------------------------------------------------------------------------
  • Localized names of the special pages
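
The Crawl-delay comment above is only advisory: none of the groups recorded in this scan actually declares a Crawl-delay line, so a polite client must fall back to its own rate limit. A minimal sketch of checking for one anyway with the standard-library parser (Python 3.6+):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://fr.wikidebates.org/robots.txt")
    rp.read()

    # crawl_delay() returns None when the matching group has no Crawl-delay,
    # which is the case for every group in this scan.
    delay = rp.crawl_delay("*")
    print(delay if delay is not None else "no Crawl-delay; apply your own rate limit")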