bigcontentsearch.com
robots.txt

Robots Exclusion Standard data for bigcontentsearch.com

Resource Scan

Scan Details

Site Domain bigcontentsearch.com
Base Domain bigcontentsearch.com
Scan Status Failed
Failure Stage Fetching resource
Failure Reason Server returned a server error
Last Scan 2025-10-28T14:59:54+00:00
Next Scan 2025-12-27T14:59:54+00:00

Last Successful Scan

Scanned 2025-08-06T10:15:39+00:00
URL https://www.bigcontentsearch.com/robots.txt
Domain IPs 104.21.47.243, 172.67.174.142, 2606:4700:3031::6815:2ff3, 2606:4700:3036::ac43:ae8e
Response IP 172.67.174.142
Found Yes
Hash a9d51ba67e7318d4c8d7fb728f8718d6486c3881451b19349afada11a20bcea0
SimHash bc112a014fe5

Groups

*

Rule Path
Disallow /predstavitev$
Disallow /portal_banneradmin
Disallow /portal_url
Disallow /login_form
Disallow /mail_password_form

googlebot

Rule Path
Disallow /*sendto_form$
Disallow /*folder_factories$
Disallow /predstavitev$
Disallow /portal_banneradmin
Disallow /portal_url
Disallow /login_form
Disallow /mail_password_form
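Putting the two groups back into robots.txt syntax, the scanned file would have looked roughly like this (a reconstruction from the rules and comments recorded above; the original placement of comments relative to rules is not preserved by the scan):

```
# Define access-restrictions for robots/spiders
# http://www.robotstxt.org/wc/norobots.html

# Exclude a few urls that shouldn't show up in search results
User-agent: *
Disallow: /predstavitev$
Disallow: /portal_banneradmin
Disallow: /portal_url
Disallow: /login_form
Disallow: /mail_password_form

# Add Googlebot-specific syntax extension to exclude forms
# that are repeated for each piece of content in the site
# the wildcard is only supported by Googlebot
# http://www.google.com/support/webmasters/bin/answer.py?answer=40367&ctx=sibling
# Also repeat exclusions above otherwise they will NOT have any effect on Googlebot!
User-agent: googlebot
Disallow: /*sendto_form$
Disallow: /*folder_factories$
Disallow: /predstavitev$
Disallow: /portal_banneradmin
Disallow: /portal_url
Disallow: /login_form
Disallow: /mail_password_form
```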

Comments

  • Define access-restrictions for robots/spiders
  • http://www.robotstxt.org/wc/norobots.html
  • Exclude a few urls that shouldn't show up in search results
  • Add Googlebot-specific syntax extension to exclude forms
  • that are repeated for each piece of content in the site
  • the wildcard is only supported by Googlebot
  • http://www.google.com/support/webmasters/bin/answer.py?answer=40367&ctx=sibling
  • Also repeat exclusions above otherwise they will NOT have any effect on Googlebot!
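The last comment reflects how the Robots Exclusion Protocol works: a crawler obeys only the single most specific group that matches its user agent, so a googlebot group does not inherit rules from the * group. A minimal sketch with Python's standard urllib.robotparser illustrates this, using hypothetical prefix-only rules (the stdlib parser does not implement Google's */$ wildcard extension, so the wildcard lines above are left out):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical minimal rules: the googlebot group deliberately
# omits /login_form to show that groups override, not merge.
rules = """\
User-agent: *
Disallow: /login_form

User-agent: googlebot
Disallow: /sendto_form
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot obeys only its own group, so /login_form is NOT blocked for it:
print(rp.can_fetch("googlebot", "https://example.com/login_form"))   # True
# Its own group's rule still applies:
print(rp.can_fetch("googlebot", "https://example.com/sendto_form"))  # False
# Every other crawler falls back to the * group:
print(rp.can_fetch("otherbot", "https://example.com/login_form"))    # False
```

This is why the file above repeats the five generic exclusions inside the googlebot group: without them, Googlebot would be free to crawl those paths.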