expertscolumn.com
robots.txt

Robots Exclusion Standard data for expertscolumn.com

Resource Scan

Scan Details

Site Domain expertscolumn.com
Base Domain expertscolumn.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't establish SSL connection.
Last Scan 2025-09-20T07:34:51+00:00
Next Scan 2025-12-19T07:34:51+00:00
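
The most recent scan failed before the file could be fetched because the TLS handshake did not complete. A minimal sketch of how that failure could be reproduced is below; the port and timeout values are illustrative assumptions, not values taken from the scan data.

```python
import socket
import ssl

HOST = "expertscolumn.com"
PORT = 443  # assumed standard HTTPS port

context = ssl.create_default_context()
try:
    # Open a TCP connection and attempt the TLS handshake, mirroring
    # the "Couldn't establish SSL connection" failure stage.
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake OK:", tls.version())
except OSError as exc:  # ssl.SSLError is a subclass of OSError
    print("SSL connection failed:", exc)
```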

Last Successful Scan

Scanned 2024-11-25T03:55:22+00:00
URL https://expertscolumn.com/robots.txt
Domain IPs 209.127.178.206
Response IP 209.127.178.206
Found Yes
Hash dc0b165d4814765f80b81953c0d46d9ec599371ae3984246fb1c0c4d13df8a02
SimHash aa90bd02cd74
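
The Hash value is 64 hexadecimal characters, consistent with a SHA-256 digest of the fetched file. Assuming that is how it is computed (the algorithm is not stated in the scan data), the value can be re-derived from the live file as follows.

```python
import hashlib
import urllib.request

URL = "https://expertscolumn.com/robots.txt"

# Fetch the raw robots.txt body and hash it; assumes the scan's Hash
# field is a SHA-256 digest of the response body as-is.
with urllib.request.urlopen(URL, timeout=10) as resp:
    body = resp.read()

print(hashlib.sha256(body).hexdigest())
```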

Groups

*

Rule Path
Disallow /activity/
Disallow /bootstrap/
Disallow /cekeditor/
Disallow /ckeditor3/
Disallow /ebooks/
Disallow /fitnesstips/
Disallow /Geoip/
Disallow /image/
Disallow /image12/
Disallow /image123/
Disallow /img/
Disallow /offers/
Disallow /cpa/
Disallow /phpdiff/
Disallow /rating/
Disallow /smtp/
Disallow /sub-zip/
Disallow /subdomains/
Disallow /Swift-5.1.0/
Disallow /tutorials/
Disallow /weblogs/
Disallow /dash2/
Disallow /profdesign/
Disallow /404.php
Disallow /logout.php
Disallow /maintenance.php
Disallow /welcome
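
The group above applies to all user agents ("*") and contains only Disallow rules, so any path not listed remains crawlable. A short sketch of how a well-behaved crawler would evaluate these rules with Python's standard urllib.robotparser is shown below; the user-agent name and example paths are illustrative.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://expertscolumn.com/robots.txt")
rp.read()  # fetches and parses the live file

# Paths under a Disallow rule for "*" should be blocked for any agent;
# unlisted paths stay allowed.
print(rp.can_fetch("MyCrawler", "https://expertscolumn.com/activity/feed"))  # expected: False
print(rp.can_fetch("MyCrawler", "https://expertscolumn.com/some-article"))   # expected: True
```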

Comments

  • $Id: robots.txt,v 1.9.2.1 2008/12/10 20:12:19 goba Exp $
  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/wc/robots.html
  • For syntax checking, see:
  • http://www.sxw.org.uk/computing/robots/check.html
  • directories
  • Added on 27-05-2014 to disallow any dynamic link on base url
  • Disallow: /*?tpages*
  • Disallow: /*?*