ugetsu.strikingly.com
robots.txt

Robots Exclusion Standard data for ugetsu.strikingly.com

Resource Scan

Scan Details

Site Domain ugetsu.strikingly.com
Base Domain strikingly.com
Scan Status Ok
Last Scan 2024-06-27T20:54:20+00:00
Next Scan 2024-07-27T20:54:20+00:00

Last Scan

Scanned 2024-06-27T20:54:20+00:00
URL https://ugetsu.strikingly.com/robots.txt
Redirect https://ugetsu.mystrikingly.com/robots.txt
Redirect Domain ugetsu.mystrikingly.com
Redirect Base mystrikingly.com
Domain IPs 13.33.88.100, 13.33.88.116, 13.33.88.36, 13.33.88.85, 2600:9000:223b:1e00:17:9ce9:ae40:93a1, 2600:9000:223b:4e00:17:9ce9:ae40:93a1, 2600:9000:223b:6200:17:9ce9:ae40:93a1, 2600:9000:223b:7c00:17:9ce9:ae40:93a1, 2600:9000:223b:8800:17:9ce9:ae40:93a1, 2600:9000:223b:a00:17:9ce9:ae40:93a1, 2600:9000:223b:a200:17:9ce9:ae40:93a1, 2600:9000:223b:f600:17:9ce9:ae40:93a1
Redirect IPs 52.84.150.39, 52.84.150.43, 52.84.150.45, 52.84.150.63
Response IP 52.84.150.43
Found Yes
Hash be4747e4118d37e47d7041b6b09a0b9a51e8feacbfe0727295af2cefaaeb510b
SimHash aa8d6dad6450

Groups

semrushbot

Rule Path
Disallow /

blackwidow

Rule Path
Disallow /

Other Records

Field Value
sitemap https://ugetsu.mystrikingly.com/sitemap.xml
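Taken together, the groups and records above describe a robots.txt that blocks two named crawlers from the whole site and advertises one sitemap. A minimal sketch using Python's standard `urllib.robotparser` can confirm how those rules behave; the file body below is reconstructed from the scan data above, not fetched live:

```python
from urllib.robotparser import RobotFileParser

# robots.txt body reconstructed from the scan's Groups and Other Records
ROBOTS_TXT = """\
User-agent: semrushbot
Disallow: /

User-agent: blackwidow
Disallow: /

Sitemap: https://ugetsu.mystrikingly.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Both listed crawlers are disallowed from the entire site...
print(parser.can_fetch("semrushbot", "https://ugetsu.mystrikingly.com/"))  # False
print(parser.can_fetch("blackwidow", "https://ugetsu.mystrikingly.com/"))  # False

# ...while any unlisted agent falls through to the implicit allow-all default.
print(parser.can_fetch("Googlebot", "https://ugetsu.mystrikingly.com/"))   # True

# The Sitemap record is exposed separately (Python 3.8+).
print(parser.site_maps())  # ['https://ugetsu.mystrikingly.com/sitemap.xml']
```

Note that `robotparser` matches user-agent tokens case-insensitively, so `semrushbot` here behaves the same as the `SemrushBot` token the crawler itself sends.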

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /