buddyme.me/robots.txt

Robots Exclusion Standard data for buddyme.me

Resource Scan

Scan Details

Site Domain buddyme.me
Base Domain buddyme.me
Scan Status Ok
Last Scan 2025-10-08T14:30:53+00:00
Next Scan 2025-10-15T14:30:53+00:00

Last Scan

Scanned 2025-10-08T14:30:53+00:00
URL https://buddyme.me/robots.txt
Domain IPs 104.21.13.173, 172.67.200.222, 2606:4700:3033::6815:dad, 2606:4700:3033::ac43:c8de
Response IP 172.67.200.222
Found Yes
Hash a744e7571e5cd5e1cd635b9394ae2f579afdf4cf11506551e2fcbf61af4a85c7
SimHash 736c4fa9f459
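
To confirm a later fetch still matches this scan, a minimal sketch (assuming the Hash above is a SHA-256 digest of the raw response body, consistent with its 64 hex characters; the SimHash is a separate similarity fingerprint and is not recomputed here):

```python
import hashlib
import urllib.request

REPORTED_HASH = "a744e7571e5cd5e1cd635b9394ae2f579afdf4cf11506551e2fcbf61af4a85c7"

# Re-fetch the scanned resource and recompute the digest reported above.
with urllib.request.urlopen("https://buddyme.me/robots.txt") as resp:
    body = resp.read()

digest = hashlib.sha256(body).hexdigest()
print(digest)
print("matches report:", digest == REPORTED_HASH)
```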

Groups

*

Rule Path
Disallow /demo/
Disallow /demo/*
Disallow /blogposts/
Disallow /blogposts/*
Disallow /spritzerrallye/
Disallow /spritzerrallye/*
Disallow /preise/
Disallow /preise/*
Disallow /users/
Disallow /users/*
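
As a sketch of how a crawler interprets this wildcard group, the rules can be rewritten in robots.txt syntax and checked with Python's standard-library parser; the user-agent string below is a placeholder for any bot not named in a more specific group:

```python
from urllib import robotparser

# The * group above, in robots.txt syntax.
WILDCARD_GROUP = """\
User-agent: *
Disallow: /demo/
Disallow: /demo/*
Disallow: /blogposts/
Disallow: /blogposts/*
Disallow: /spritzerrallye/
Disallow: /spritzerrallye/*
Disallow: /preise/
Disallow: /preise/*
Disallow: /users/
Disallow: /users/*
"""

rp = robotparser.RobotFileParser()
rp.parse(WILDCARD_GROUP.splitlines())

# Note: the stdlib parser matches paths as literal prefixes (no wildcard
# expansion), so the "/.../*" variants are effectively redundant here.
print(rp.can_fetch("ExampleBot", "https://buddyme.me/"))         # True
print(rp.can_fetch("ExampleBot", "https://buddyme.me/demo/x"))   # False
print(rp.can_fetch("ExampleBot", "https://buddyme.me/preise/"))  # False
```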

yandex

Rule Path
Disallow /

baiduspider

Rule Path
Disallow /

fatbot

Rule Path
Disallow /

fatbot 2.0

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

proximic

Rule Path
Disallow /

sogou spider

Rule Path
Disallow /

sogou web spider

Rule Path
Disallow /

grapeshot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

petalbot

Rule Path
Disallow /

criteobot/0.1

Rule Path
Disallow /

admantx-adform

Rule Path
Disallow /

ias-ir/3.1

Rule Path
Disallow /

ias-or/3.1

Rule Path
Disallow /

ias-va/3.1

Rule Path
Disallow /
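
Each of the named groups above pairs a single user-agent token with a blanket Disallow /, shutting that crawler out of the whole site. A short sketch with two representative groups (the exact user-agent strings these bots send vary; matching in the standard-library parser is substring-based and case-insensitive):

```python
from urllib import robotparser

# Two of the named groups above, in robots.txt syntax.
NAMED_GROUPS = """\
User-agent: ahrefsbot
Disallow: /

User-agent: semrushbot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(NAMED_GROUPS.splitlines())

print(rp.can_fetch("AhrefsBot/7.0", "https://buddyme.me/"))      # False
print(rp.can_fetch("SemrushBot", "https://buddyme.me/preise/"))  # False
print(rp.can_fetch("OtherBot", "https://buddyme.me/"))           # True (no * group in this snippet)
```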

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • noindex: *location=*
  • noindex: *lat=*
  • noindex: *lon=*
  • noindex: *distance=*

Warnings

  • `noindex` is not a known field.
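
The `noindex` lines above explain this warning: `noindex` was never part of the Robots Exclusion Standard, and major crawlers ignore it in robots.txt (Google, for example, stopped honoring it there in 2019). Per-URL indexing control belongs in a robots meta tag or an X-Robots-Tag response header instead. A minimal sketch of checking that header; the query-string URL is a hypothetical example of the location/lat/lon/distance pages the patterns appear to target:

```python
import urllib.request

# Hypothetical URL of the kind the noindex patterns seem aimed at
# (query strings carrying location/lat/lon/distance parameters).
URL = "https://buddyme.me/?location=example"

with urllib.request.urlopen(URL) as resp:
    # Crawlers honor indexing directives in the X-Robots-Tag header or a
    # <meta name="robots"> tag, not in robots.txt.
    print(resp.headers.get("X-Robots-Tag", "(no X-Robots-Tag header)"))
```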