justdivorced.com
robots.txt

Robots Exclusion Standard data for justdivorced.com

Resource Scan

Scan Details

Site Domain justdivorced.com
Base Domain justdivorced.com
Scan Status Ok
Last Scan 2026-02-23T04:24:18+00:00
Next Scan 2026-03-02T04:24:18+00:00

Last Scan

Scanned 2026-02-23T04:24:18+00:00
URL https://justdivorced.com/robots.txt
Redirect https://www.justdivorced.com/robots.txt
Redirect Domain www.justdivorced.com
Redirect Base justdivorced.com
Domain IPs 191.101.228.87, 2a02:4780:15:ec6f:cdb4:12d7:5333:47d1, 2a02:4780:38:af65:c5bb:14f0:5e65:1527, 77.37.48.195
Redirect IPs 2a02:4780:84:74d:4c8:22fa:907e:8653, 2a02:4780:84:9aed:5ced:eab6:3af3:4277, 84.32.84.184, 84.32.84.220
Response IP 77.37.48.163
Found Yes
Hash 37abd7979b6ed63213ddad49dc5e7e090b9154a1bc160c6d729e04a037b6e2f5
SimHash 3d3d155e4745
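The Hash field above is a 64-hex-character value, which suggests the scanner stores a SHA-256 digest of the fetched robots.txt body (the SimHash is a separate similarity fingerprint). A minimal sketch of how such a content hash could be computed, assuming the digest covers the raw response bytes (the sample content here is illustrative, not the real file):

```python
import hashlib

# Sample robots.txt bytes -- not the actual justdivorced.com file.
body = b"User-agent: *\nDisallow: /admin/\n"

# SHA-256 of the response body; identical content always yields the
# same 64-hex-character digest, so the scanner can detect changes
# between scans by comparing hashes.
digest = hashlib.sha256(body).hexdigest()
print(len(digest))  # 64
```

Comparing the stored digest against a fresh one on the next scan is enough to tell whether the file changed, without keeping full copies of every version.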

Groups

googlebot

Rule Path
Disallow

googlebot-image

Rule Path
Disallow

mediapartners-google

Rule Path
Disallow
Disallow /admin/
Disallow /cgi-bin/
Disallow /ssl/
Disallow /tmp/
Disallow /*?*
Disallow /*?
Disallow /*~*
Disallow /*~
Disallow ow_version.xml
Disallow INSTALL.txt
Disallow LICENSE.txt
Disallow README.txt
Disallow UPDATE.txt
Disallow CHANGES.txt
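The mediapartners-google group mixes plain prefix rules (`/admin/`) with wildcard rules (`/*?*`, `/*~`). In Googlebot-style matching, `*` matches any character sequence and rules without wildcards are prefix matches; note that the trailing file-name rules (`ow_version.xml` and similar) do not begin with `/`, so most parsers will never match them. A rough sketch of that matching logic, under those assumptions (the helper names here are illustrative, not part of any scanner):

```python
import re

# Path rules from the mediapartners-google group above (wildcard-capable
# rules only; the non-slash file-name rules are omitted, since robots.txt
# paths conventionally start with "/").
DISALLOW = ["/admin/", "/cgi-bin/", "/ssl/", "/tmp/",
            "/*?*", "/*?", "/*~*", "/*~"]

def rule_to_regex(pattern: str) -> re.Pattern:
    # Escape everything except '*', which becomes '.*'.
    return re.compile(".*".join(re.escape(part) for part in pattern.split("*")))

def is_disallowed(url_path: str) -> bool:
    # re.match anchors at the start, so plain rules act as prefix matches.
    return any(rule_to_regex(p).match(url_path) for p in DISALLOW)

print(is_disallowed("/admin/login"))  # True: prefix rule /admin/
print(is_disallowed("/search?q=x"))   # True: wildcard rule /*?
print(is_disallowed("/about.html"))   # False: no rule matches
```

The `/*?` and `/*?*` pair is effectively redundant under this matching model, since any path containing `?` satisfies both; the same holds for `/*~` and `/*~*`.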

Other Records

Field Value
sitemap https://www.justdivorced.com/sitemap.xml

Comments

  • This file contains rules to prevent the crawling and indexing of certain parts
  • of your web site by the spiders of major search engines such as Google and Yahoo.
  • By managing these rules you can allow or disallow access to specific folders
  • and files for such spiders.
  • It is a good way to hide private data or to save bandwidth.
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html