it.youbianku.com
robots.txt

Robots Exclusion Standard data for it.youbianku.com

Resource Scan

Scan Details

Site Domain it.youbianku.com
Base Domain youbianku.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a client error.
Last Scan 2025-12-01T21:05:06+00:00
Next Scan 2026-03-01T21:05:06+00:00

Last Successful Scan

Scanned 2024-07-17T20:26:01+00:00
URL https://it.youbianku.com/robots.txt
Domain IPs 104.26.14.88, 104.26.15.88, 172.67.71.197, 2606:4700:20::681a:e58, 2606:4700:20::681a:f58, 2606:4700:20::ac43:47c5
Response IP 172.67.71.197
Found Yes
Hash a7d31e5525cbc632b9fc056aacc0b212c80c515534fdcd064ad5a048ef072725
SimHash aa16477bede7
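
The Hash above is 64 hex digits, consistent with a SHA-256 digest of the fetched file; the SimHash is a short locality-sensitive fingerprint used to flag near-duplicate revisions between scans. A minimal verification sketch in Python, assuming the Hash field is SHA-256 over the raw response bytes (the scanner's exact hashing input is not documented here):

    import hashlib
    import urllib.request

    ARCHIVED = "a7d31e5525cbc632b9fc056aacc0b212c80c515534fdcd064ad5a048ef072725"

    # Note: while the live fetch keeps failing with a client error, as in
    # the latest scan, urlopen() will raise HTTPError instead of returning.
    with urllib.request.urlopen("https://it.youbianku.com/robots.txt") as resp:
        body = resp.read()

    digest = hashlib.sha256(body).hexdigest()
    print(digest == ARCHIVED, digest)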

Groups

mediapartners-google

Rule Path
Disallow

*

Rule Path
Allow /index.php?title=%E7%89%B9%E6%AE%8A%3A%E6%9C%80%E8%BF%91%E6%9B%B4%E6%94%B9
Allow /index.php?title=%E7%89%B9%E6%AE%8A%3A%E6%9C%80%E6%96%B0%E9%A1%B5%E9%9D%A2
Allow /index.php?title=Special%3A%E6%9C%80%E8%BF%91%E6%9B%B4%E6%94%B9
Allow /index.php?title=Special%3A%E6%9C%80%E6%96%B0%E9%A1%B5%E9%9D%A2
Allow /index.php?title=Special%3ARecentchanges
Allow /index.php?title=Special%3ANewpages
Allow /index.php?title=Category%3A
Allow /index.php?title=%E5%88%86%E7%B1%BB%3A
Disallow /*MediaWiki
Disallow /Talk
Disallow /%E8%AE%A8%E8%AE%BA
Disallow /thumb.php
Disallow /index.php
Disallow /skins/
Disallow /Special
Disallow /%E7%89%B9%E6%AE%8A
Disallow /Especial
Disallow /Sp%C3%A9cial
Disallow /%D0%A1%D0%BB%D1%83%D0%B6%D0%B5%D0%B1%D0%BD%D0%B0%D1%8F
Disallow /%E7%89%B9%E5%88%A5
Disallow /%D8%AE%D8%A7%D8%B5
Disallow /*action%3D
Disallow /*oldid%3D
Disallow /*diff%3D
Disallow /*printable%3D
Disallow /1027280/
Disallow /cdn-cgi/
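
The percent-encoded paths in this group are localized MediaWiki namespace names rather than opaque strings: the Allow rules open up the recent-changes, new-pages, and category listings in both Chinese and canonical form, while the Disallow rules close the Special, Talk, and maintenance namespaces in every language variant the wiki serves. A small decoding sketch (paths copied from the rules above):

    from urllib.parse import unquote

    paths = [
        "/index.php?title=%E7%89%B9%E6%AE%8A%3A%E6%9C%80%E8%BF%91%E6%9B%B4%E6%94%B9",
        "/%E8%AE%A8%E8%AE%BA",
        "/%D0%A1%D0%BB%D1%83%D0%B6%D0%B5%D0%B1%D0%BD%D0%B0%D1%8F",
        "/%D8%AE%D8%A7%D8%B5",
    ]

    for p in paths:
        # unquote() reverses the percent-encoding used in the rule paths
        print(p, "->", unquote(p))

    # /index.php?title=特殊:最近更改  (Chinese Special:RecentChanges)
    # /讨论                          (Chinese Talk namespace)
    # /Служебная                     (Russian Special namespace)
    # /خاص                           (Arabic Special namespace)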

ubicrawler

Rule Path
Disallow /

doc

Rule Path
Disallow /

zao

Rule Path
Disallow /

sitecheck.internetseer.com

Rule Path
Disallow /

zealbot

Rule Path
Disallow /

msiecrawler

Rule Path
Disallow /

sitesnagger

Rule Path
Disallow /

webstripper

Rule Path
Disallow /

webcopier

Rule Path
Disallow /

fetch

Rule Path
Disallow /

offline explorer

Rule Path
Disallow /

teleport

Rule Path
Disallow /

teleportpro

Rule Path
Disallow /

webzip

Rule Path
Disallow /

linko

Rule Path
Disallow /

httrack

Rule Path
Disallow /

microsoft.url.control

Rule Path
Disallow /

xenu

Rule Path
Disallow /

larbin

Rule Path
Disallow /

libwww

Rule Path
Disallow /

zyborg

Rule Path
Disallow /

download ninja

Rule Path
Disallow /

wget

Rule Path
Disallow /

grub-client

Rule Path
Disallow /

k2spider

Rule Path
Disallow /

npbot

Rule Path
Disallow /

webreaper

Rule Path
Disallow /
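
The groups above form a standard MediaWiki blocklist: each named site-copier or link-checker agent is denied the whole site, ordinary agents are governed by the * group, and the empty Disallow in the mediapartners-google group permits everything for that agent. A minimal sketch of how such rules evaluate, using Python's urllib.robotparser with a fragment of the archived rules fed in as a string (the live fetch currently fails with a client error); the sample request paths are hypothetical:

    import urllib.robotparser

    # A fragment of the archived groups above.
    lines = [
        "User-agent: *",
        "Disallow: /index.php",
        "Disallow: /skins/",
        "",
        "User-agent: wget",
        "Disallow: /",
    ]

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(lines)

    print(rp.can_fetch("wget", "/10121/"))                   # False: wget is blocked site-wide
    print(rp.can_fetch("Mozilla/5.0", "/skins/common.css"))  # False: * disallows /skins/
    print(rp.can_fetch("Mozilla/5.0", "/10121/"))            # True: plain pages stay open

Note that the standard-library parser matches paths as literal prefixes, so wildcard rules such as /*action%3D are not interpreted the way Google's spec intends; a parser that implements wildcard semantics, such as Protego, is needed for those.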

Other Records

Field Value
sitemap https://it.youbianku.com/sitemap.xml
sitemap https://it.youbianku.com/rss.xml
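
Both sitemap records are plain robots.txt Sitemap directives and are surfaced by standard parsers; for instance, urllib.robotparser exposes them through site_maps() on Python 3.8+. A quick sketch, parsed from a string mirroring the two records above:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.parse([
        "Sitemap: https://it.youbianku.com/sitemap.xml",
        "Sitemap: https://it.youbianku.com/rss.xml",
    ])

    print(rp.site_maps())
    # ['https://it.youbianku.com/sitemap.xml', 'https://it.youbianku.com/rss.xml']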

Comments

  • jamesqi 2014-11-27 14:27
  • it.youbianku.com
  • Add Start
  • sitemap start
  • sitemap end
  • Crawl-delay: 10
  • 2022-11-15
  • 2018-12-18 commented out the lines below because Google Webmaster Tools cannot get the resource load.php
  • Disallow: /load.php
  • Disallow: /images/
  • 2023-4-8
  • Crawl-delay: 300 # wait 300 seconds between successive requests to the same server (for Yahoo Slurp)
  • Request-rate: 1/10 # maximum rate is one page every 10 seconds
  • Visit-time: 0000-0800
  • Request-rate: 1/20s 1020-1200 # between 10:20 and 12:00, one visit every 20 seconds
  • Add End
  • Crawlers that are kind enough to obey, but which we'd rather not have
  • unless they're feeding search engines.
  • Some bots are known to be trouble, particularly those designed to copy
  • entire sites. Please obey robots.txt.
  • Sorry, wget in its recursive mode is a frequent problem.
  • Please read the man page and use it properly; there is a
  • --wait option you can use to set the delay between hits,
  • for instance.
  • The 'grub' distributed client has been *very* poorly behaved.
  • Doesn't follow robots.txt anyway, but...
  • Hits many times per second, not acceptable
  • http://www.nameprotect.com/botinfo.html
  • A capture bot, downloads gazillions of pages with no public benefit
  • http://www.webreaper.net/
  • Don't allow the Wayback Machine to index user pages
  • User-agent: ia_archiver
  • Disallow: /wiki/User
  • Disallow: /wiki/Benutzer
  • Friendly, low-speed bots are welcome viewing article pages, but not
  • dynamically-generated pages please.
  • Inktomi's "Slurp" can read a minimum delay between hits; if your
  • bot supports such a thing using the 'Crawl-delay' or another
  • instruction, please let us know.
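
The comments also record the site's experiments with nonstandard throttling directives: Crawl-delay, Request-rate, and Visit-time. Python's urllib.robotparser understands the first two (since 3.6); Visit-time and the time-windowed Request-rate form (1/20s 1020-1200) are not parsed by it, and most major crawlers ignore all three. A sketch using sample directives mirroring the quoted values, fed in as a string since in the live file they appear only inside comments:

    import urllib.robotparser

    lines = [
        "User-agent: *",
        "Crawl-delay: 10",
        "Request-rate: 1/20",    # plain requests/seconds form only
        "Disallow: /index.php",
    ]

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(lines)

    print(rp.crawl_delay("*"))   # 10
    rate = rp.request_rate("*")  # RequestRate(requests=1, seconds=20)
    print(rate.requests, rate.seconds)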