fgv.br/robots.txt

Robots Exclusion Standard data for fgv.br

Resource Scan

Scan Details

Site Domain fgv.br
Base Domain fgv.br
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't connect to server.
Last Scan 2024-09-15T16:13:54+00:00
Next Scan 2024-12-14T16:13:54+00:00

Last Successful Scan

Scanned 2023-04-29T19:47:50+00:00
URL https://fgv.br/robots.txt
Domain IPs 189.125.96.98
Response IP 189.125.96.98
Found Yes
Hash f14a7911cee3fe876466c499cdf9e3485f814454d9e3cd999cee14ca5a912059
SimHash b812954b4744
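
The Hash is a 64-character hex digest, consistent with SHA-256 over the fetched file body, though the report does not name the algorithm; the SimHash is a locality-sensitive fingerprint useful for spotting near-duplicate revisions between scans. A minimal sketch for reproducing such a fingerprint, assuming SHA-256 (Python standard library only):

    import hashlib
    import urllib.request

    # Assumption: the scan's Hash field is a SHA-256 digest of the raw
    # response body; the report does not state which algorithm it uses.
    URL = "https://fgv.br/robots.txt"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read()

    print(hashlib.sha256(body).hexdigest())
    # A digest equal to the recorded hash would indicate the file is
    # unchanged since the last successful scan (2023-04-29).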

Groups

*
googlebot

Rule Path
Allow

googlebot-image

Rule Path
Allow

mediapartners-google

Rule Path
Allow

adsbot-google

Rule Path
Allow
Disallow /graduacao
Disallow /fgvonline-manutencao
Disallow /aol
Disallow /BKP
Disallow /cpdoc/acervo/App_Themes/
Disallow /cpdoc/acervo/App_Browsers/
Disallow /cpdoc/acervo/CorpoEmails/
Disallow /cpdoc/acervo/css/
Disallow /cpdoc/acervo/Erros/
Disallow /cpdoc/acervo/FreeAccess/
Disallow /cpdoc/acervo/jquery.fancybox/
Disallow /cpdoc/acervo/js/
Disallow /tic/documentos/
Disallow /logo
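
Python's standard-library urllib.robotparser can evaluate directives like those above. A minimal sketch that replays part of the adsbot-google group from this scan and queries a few paths ("/sobre" is an arbitrary example path; the Allow rule is omitted because its path is not recorded in the report, and only a subset of the Disallow rules is shown):

    import urllib.robotparser

    # Directives transcribed from the adsbot-google group of this scan
    # (subset). The parser strips each line, so indentation is harmless.
    RULES = """\
    User-agent: adsbot-google
    Disallow: /graduacao
    Disallow: /fgvonline-manutencao
    Disallow: /cpdoc/acervo/App_Themes/
    Disallow: /tic/documentos/
    Disallow: /logo
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(RULES.splitlines())

    for path in ("/graduacao", "/cpdoc/acervo/App_Themes/main.css", "/sobre"):
        print(path, rp.can_fetch("adsbot-google", "https://fgv.br" + path))
    # Expected: the first two are blocked (False); /sobre matches no
    # rule and defaults to allowed (True).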

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • Directories CPDOC
  • Directories TIC
  • Directories DICOM
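
As the comments above note, crawlers only honor a robots.txt served from the root of its host. A minimal sketch of deriving that canonical location from any page URL, using the same example.com host as the comments (standard library only):

    from urllib.parse import urlsplit, urlunsplit

    def robots_url(page_url: str) -> str:
        # Crawlers consult scheme://host/robots.txt at the host root;
        # a copy at any deeper path is ignored.
        parts = urlsplit(page_url)
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_url("http://example.com/site/page.html"))
    # -> http://example.com/robots.txt ("/site/robots.txt" would be ignored)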