baike.com
robots.txt

Robots Exclusion Standard data for baike.com

Resource Scan

Scan Details

Site Domain baike.com
Base Domain baike.com
Scan Status Ok
Last Scan 2024-04-09T18:12:47+00:00
Next Scan 2024-05-09T18:12:47+00:00

Last Scan

Scanned 2024-04-09T18:12:47+00:00
URL https://baike.com/robots.txt
Redirect https://www.baike.com/robots.txt
Redirect Domain www.baike.com
Redirect Base baike.com
Domain IPs 122.14.229.15, 122.14.229.17
Redirect IPs 163.181.81.27, 163.181.81.28, 163.181.81.29, 163.181.81.30, 163.181.81.31, 163.181.81.32, 163.181.81.33, 163.181.81.34
Response IP 163.181.81.33
Found Yes
Hash 04780867d8ec34230f97b49bad7e5bc487026baec44f6d0acff120f6334806d8
SimHash a612715bcff7
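
The Hash above is a 64-character hex string, consistent with a SHA-256 digest of the fetched file. Below is a minimal sketch of reproducing it, assuming the digest is computed over the raw response body; the fetch_robots_hash helper is illustrative, not part of the scanner.

    # Sketch: fetch robots.txt and compute its SHA-256 digest.
    # urlopen follows the HTTP redirect to https://www.baike.com/robots.txt
    # automatically, matching the Redirect fields recorded above.
    import hashlib
    import urllib.request

    def fetch_robots_hash(url: str) -> str:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        return hashlib.sha256(body).hexdigest()

    print(fetch_robots_hash("https://baike.com/robots.txt"))
    # Should match the recorded Hash while the file is unchanged:
    # 04780867d8ec34230f97b49bad7e5bc487026baec44f6d0acff120f6334806d8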

Groups

ubicrawler

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /

baiduspider

Rule Path
Disallow /

*

Rule Path
Allow /
Disallow /edit_community
Disallow /search
Disallow /user
Disallow /redirect_link
Disallow /snapshot_page
Disallow /editor
Disallow /task_center
Disallow /tcs
Disallow /history
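
Under RFC 9309, the most specific rule (the longest matching path) wins, so in the wildcard (*) group above, Disallow /search overrides Allow / for any URL whose path starts with /search. Below is a minimal longest-match checker illustrating this, with the group transcribed from this scan; it handles plain path prefixes only, not the * and $ wildcard forms.

    # Sketch: longest-match evaluation of the wildcard (*) group above,
    # in the spirit of RFC 9309. Plain path prefixes only.
    WILDCARD_GROUP = [
        ("Allow", "/"),
        ("Disallow", "/edit_community"),
        ("Disallow", "/search"),
        ("Disallow", "/user"),
        ("Disallow", "/redirect_link"),
        ("Disallow", "/snapshot_page"),
        ("Disallow", "/editor"),
        ("Disallow", "/task_center"),
        ("Disallow", "/tcs"),
        ("Disallow", "/history"),
    ]

    def is_allowed(path: str, rules) -> bool:
        # The rule with the longest matching prefix wins; on a tie,
        # Allow wins. A path matched by no rule is allowed.
        best_verb, best_len = "Allow", -1
        for verb, prefix in rules:
            if path.startswith(prefix) and (len(prefix) > best_len
                    or (len(prefix) == best_len and verb == "Allow")):
                best_verb, best_len = verb, len(prefix)
        return best_verb == "Allow"

    print(is_allowed("/wiki/some_article", WILDCARD_GROUP))  # True
    print(is_allowed("/search?q=test", WILDCARD_GROUP))      # False

Python's built-in urllib.robotparser applies rules in file order (first match wins), which would let the leading Allow / shadow every Disallow line in this group; the sketch implements longest-match directly for that reason.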

Comments

  • Some bots are known to be trouble, particularly those designed to copy entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem. Please read the man page and use it properly; there is a --wait option you can use to set the delay between hits, for instance (a polite-fetch sketch follows this list).
  • The 'grub' distributed client has been *very* poorly behaved. Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable: http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit: http://www.webreaper.net/
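
In the spirit of the wget --wait advice above, here is a minimal polite-fetch sketch: consult robots.txt first, then space out requests. The MyCrawler user-agent string, the URL list, and the one-second delay are illustrative values, not anything published by the site.

    # Sketch: check robots.txt, then fetch with a delay between hits,
    # like wget's --wait option.
    import time
    import urllib.request
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser("https://www.baike.com/robots.txt")
    rp.read()

    urls = [
        "https://www.baike.com/",        # illustrative URLs only
        "https://www.baike.com/wiki/a",
    ]

    for url in urls:
        if not rp.can_fetch("MyCrawler", url):
            continue  # skip anything the file disallows for this agent
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        time.sleep(1.0)  # delay between hits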