link.me
robots.txt

Robots Exclusion Standard data for link.me

Resource Scan

Scan Details

Site Domain link.me
Base Domain link.me
Scan Status Ok
Last Scan 2025-10-26T04:29:07+00:00
Next Scan 2025-11-02T04:29:07+00:00

Last Scan

Scanned 2025-10-26T04:29:07+00:00
URL https://link.me/robots.txt
Domain IPs 104.18.4.166, 104.18.5.166, 2606:4700::6812:4a6, 2606:4700::6812:5a6
Response IP 104.18.5.166
Found Yes
Hash 412aa363009943b6a7f5fb21053121de53acc35ee8e9794267dec00cf0309475
SimHash a0475301c5b1
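
The Hash recorded above is 64 hexadecimal characters, which matches the length of a SHA-256 digest. A minimal Python sketch for re-checking a fresh fetch against the recorded value, assuming the field really is the SHA-256 of the raw response body, could look like this:

  import hashlib
  import urllib.request

  ROBOTS_URL = "https://link.me/robots.txt"
  # Value from the scan above; assumed to be SHA-256 of the raw body.
  EXPECTED_SHA256 = "412aa363009943b6a7f5fb21053121de53acc35ee8e9794267dec00cf0309475"

  with urllib.request.urlopen(ROBOTS_URL) as resp:
      body = resp.read()

  digest = hashlib.sha256(body).hexdigest()
  print("fetched:", digest)
  print("matches scan:", digest == EXPECTED_SHA256)

If the digests differ, the file has changed since the last scan (or the scanner hashes a normalized form rather than the raw bytes).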

Groups

User Agent                  Rule      Path
applebot                    Allow     /
baiduspider                 Allow     /
bingbot                     Allow     /
discordbot                  Allow     /
facebookexternalhit         Allow     /
googlebot                   Allow     /
googlebot-image             Allow     /
google-inspectiontool       Allow     /
ia_archiver                 Allow     /
linkedinbot                 Allow     /
msnbot                      Allow     /
naverbot                    Allow     /
screaming frog seo spider   Allow     /
seznambot                   Allow     /
slurp                       Allow     /
teoma                       Allow     /
telegrambot                 Allow     /
twitterbot                  Allow     /
yandex                      Allow     /
yeti                        Allow     /
snapchatadsbot              Allow     /
semrushbot                  Allow     /
pinterestbot                Allow     /
*                           Disallow  /
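
Taken together, the groups above allow each named crawler everything and fall back to a blanket Disallow for every other user agent. A short Python sketch with the standard library's urllib.robotparser, fed an abbreviated reconstruction of these rules, illustrates the effect:

  from urllib.robotparser import RobotFileParser

  # Abbreviated reconstruction of the groups above: named crawlers get
  # "Allow: /", every other user agent falls under the "*" Disallow.
  rules = [
      "User-agent: googlebot",
      "Allow: /",
      "",
      "User-agent: bingbot",
      "Allow: /",
      "",
      "User-agent: *",
      "Disallow: /",
  ]

  rp = RobotFileParser()
  rp.parse(rules)

  print(rp.can_fetch("googlebot", "https://link.me/profile"))      # True: listed crawler
  print(rp.can_fetch("SomeRandomBot", "https://link.me/profile"))  # False: falls under "*"

Replacing parse() with set_url("https://link.me/robots.txt") followed by read() would run the same checks against the live file.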

Other Records

Field Value
sitemap https://link.me/sitemap.xml
sitemap https://link.me/users-sitemap.xml
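
The two sitemap records point crawlers at the site's URL inventory directly. A small Python sketch for listing the URLs they contain, assuming both are plain urlset files rather than sitemap indexes, might look like this:

  import urllib.request
  import xml.etree.ElementTree as ET

  SITEMAPS = [
      "https://link.me/sitemap.xml",
      "https://link.me/users-sitemap.xml",
  ]
  NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

  for sitemap_url in SITEMAPS:
      with urllib.request.urlopen(sitemap_url) as resp:
          tree = ET.parse(resp)
      # <loc> holds each listed URL; a sitemap index nests entries under <sitemap> instead.
      for loc in tree.findall(".//sm:loc", NS):
          print(loc.text)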

Comments

  • We allow everything from the root
  • and then set noindex on our backends for paths that we want to Disallow
  • to prevent this list from being too complex and long
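
The comments describe keeping robots.txt permissive and handling exclusions with noindex at the application layer instead. One common way to do that, shown here as a hedged sketch using Flask and hypothetical path prefixes (the report does not say how link.me actually implements it), is an X-Robots-Tag response header:

  from flask import Flask, request

  app = Flask(__name__)

  # Hypothetical backend prefixes to keep out of search indexes without
  # enumerating them in robots.txt.
  NOINDEX_PREFIXES = ("/admin", "/internal")

  @app.after_request
  def add_noindex_header(response):
      if request.path.startswith(NOINDEX_PREFIXES):
          # X-Robots-Tag tells compliant crawlers not to index this response.
          response.headers["X-Robots-Tag"] = "noindex"
      return response

Because these pages stay crawlable, crawlers can actually see the noindex signal, which is why the robots.txt above allows everything rather than disallowing those paths.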