wikihow.life
robots.txt

Robots Exclusion Standard data for wikihow.life

Resource Scan

Scan Details

Site Domain wikihow.life
Base Domain wikihow.life
Scan Status Ok
Last Scan 2024-06-12T00:53:59+00:00
Next Scan 2024-06-19T00:53:59+00:00

Last Scan

Scanned 2024-06-12T00:53:59+00:00
URL https://wikihow.life/robots.txt
Domain IPs 151.101.1.91, 151.101.129.91, 151.101.193.91, 151.101.65.91
Response IP 151.101.1.91
Found Yes
Hash 9b0fae881277a7693ecdf74a9b5b05fe7de2d8d1553c003524889afcf83674d0
SimHash e4505159cdf7
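
The Hash is 64 hexadecimal digits, which is consistent with a SHA-256 digest of the fetched file; the SimHash is presumably a locality-sensitive fingerprint used to detect near-duplicate revisions between scans. A minimal sketch for reproducing such a content hash in Python, assuming (unconfirmed) that the scanner hashes the raw response body:

    import hashlib
    import urllib.request

    # Fetch the live robots.txt and hash its raw bytes. The digest will
    # only equal the Hash above if the file has not changed since the scan.
    with urllib.request.urlopen("https://wikihow.life/robots.txt") as resp:
        body = resp.read()
    print(hashlib.sha256(body).hexdigest())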

Groups
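
Each group below corresponds to a User-agent stanza in the raw file, followed by that stanza's rules. As a sketch of the mapping (the stanza is reconstructed from this listing, not quoted from the live file), the first group can be rebuilt and tested with Python's standard urllib.robotparser:

    import urllib.robotparser

    # Reconstructed stanza for the anthropic-ai group; "Disallow: /"
    # blocks the entire site for that agent.
    lines = [
        "User-agent: anthropic-ai",
        "Disallow: /",
    ]
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(lines)
    print(rp.can_fetch("anthropic-ai", "https://wikihow.life/Main-Page"))  # False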

anthropic-ai

Rule Path
Disallow /

archive.org

Rule Path
Disallow /api.php
Disallow /index.php
Disallow /Special%3A

ccbot

Rule Path
Disallow /

doc

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

facebookbot

Rule Path
Disallow /

fetch

Rule Path
Disallow /

gptbot

Rule Path
Disallow /

hmse_robot

Rule Path
Disallow /

httrack

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

linko

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

npbot

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

omigilibot

Rule Path
Disallow /

perplexitybot

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

ubicrawler

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webzip

Rule Path
Disallow /

wget

Rule Path
Disallow /

xenu

Rule Path
Disallow /

zao

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

adsbot-google

Rule Path
Allow /

mediapartners-google

Rule Path
Allow /

googlebot

Rule Path
Allow /

*

Rule Path
Allow /Special%3ALSearch
Allow /Special%3AQABox
Allow /index.php?*printable
Disallow /index.php
Disallow /*feed%3Drss
Disallow /*action%3Ddelete
Disallow /*action%3Dhistory
Disallow /*action%3Dwatch
Disallow /Special%3A
Disallow /*platform%3D
Disallow /*variant%3D
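
In these paths, %3A and %3D are the percent-encoded forms of ":" and "=", so /Special%3A covers URLs such as /Special:Random. A quick live check of this catch-all group is sketched below, with two caveats: the standard-library parser does plain prefix matching, so it neither expands the * wildcards in patterns like /*feed%3Drss nor normalizes percent-escapes in rule paths the way Google-style matchers do, and the agent name and article URL are placeholders:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser("https://wikihow.life/robots.txt")
    rp.read()  # fetch and parse the live file

    # An unknown agent falls through to the * group: /index.php is
    # disallowed, while ordinary article paths carry no matching rule.
    print(rp.can_fetch("SomeUnknownBot", "https://wikihow.life/index.php?title=X"))  # expected False
    print(rp.can_fetch("SomeUnknownBot", "https://wikihow.life/Grow-Bamboo"))        # expected True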

Comments

  • robots.txt
  • based on wikipedia.org's robots.txt
  • If your bot supports such a thing using the 'Crawl-delay' or another instruction, please let us know. We can add it to our robots.txt.
  • Friendly, low-speed bots are welcome viewing article pages, but not dynamically-generated pages please. Article pages contain our site's real content.
  • Doesn't follow robots.txt anyway, but...
  • Requests many pages per second
  • http://www.nameprotect.com/botinfo.html
  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • wget in recursive mode uses too many resources for us. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance. Please wait 3 seconds between each request.
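
Putting those requests into practice, here is a minimal polite-fetch sketch in Python that checks the file before each request and spaces hits at least three seconds apart, as asked above; the bot name and URL list are placeholders:

    import time
    import urllib.request
    import urllib.robotparser

    AGENT = "FriendlyExampleBot"  # placeholder user-agent token

    rp = urllib.robotparser.RobotFileParser("https://wikihow.life/robots.txt")
    rp.read()

    # Honor a published Crawl-delay if one ever appears; otherwise fall
    # back to the 3-second spacing requested in the comments above.
    delay = rp.crawl_delay(AGENT) or 3.0

    for url in ["https://wikihow.life/Main-Page"]:  # placeholder URL list
        if rp.can_fetch(AGENT, url):
            req = urllib.request.Request(url, headers={"User-Agent": AGENT})
            with urllib.request.urlopen(req) as resp:
                resp.read()
        time.sleep(delay)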