viasat.com
robots.txt

Robots Exclusion Standard data for viasat.com

Resource Scan

Scan Details

Site Domain viasat.com
Base Domain viasat.com
Scan Status Ok
Last Scan 2024-06-09T05:11:13+00:00
Next Scan 2024-06-23T05:11:13+00:00

Last Scan

Scanned 2024-06-09T05:11:13+00:00
URL https://viasat.com/robots.txt
Domain IPs 3.165.102.100, 3.165.102.104, 3.165.102.112, 3.165.102.22
Response IP 3.165.102.100
Found Yes
Hash 6b6e593958a9254a95a9b0e83f7ae7f9ab0b3294f6ab0069d744c240265512e0
SimHash a8129d13c764

Groups

*

Rule Path
Allow /

Other Records

Field Value
crawl-delay 10

Other Records

Field Value
sitemap https://www.viasat.com/sitemap.xml
sitemap https://www.viasat.com/en-qa/sitemap.xml
sitemap https://www.viasat.com/pt-br/sitemap.xml
sitemap https://www.viasat.com/es-mx/sitemap.xml
sitemap https://www.viasat.com/es-es/sitemap.xml
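Taken together, the records above suggest a robots.txt file along these lines (a reconstruction from the scan data, not the verbatim file; the comment lines listed further below are omitted here):

```
User-agent: *
Allow: /
Crawl-delay: 10

Sitemap: https://www.viasat.com/sitemap.xml
Sitemap: https://www.viasat.com/en-qa/sitemap.xml
Sitemap: https://www.viasat.com/pt-br/sitemap.xml
Sitemap: https://www.viasat.com/es-mx/sitemap.xml
Sitemap: https://www.viasat.com/es-es/sitemap.xml
```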

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts of your site by web crawlers and spiders run by sites like Yahoo! and Google. By telling these "robots" where not to go on your site, you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
    Used: http://example.com/robots.txt
    Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see: http://www.robotstxt.org/robotstxt.html
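As the comments note, compliant crawlers fetch robots.txt from the host root and honor its rules before requesting other paths. A minimal sketch of how a crawler would consume the rules found in this scan, using Python's standard urllib.robotparser (the robots.txt content below is reconstructed from the scan data, and "ExampleBot" is a hypothetical user agent):

```python
from urllib import robotparser

# Rules reconstructed from the scan above (assumed layout, not the verbatim file).
ROBOTS_TXT = """\
User-agent: *
Allow: /
Crawl-delay: 10
Sitemap: https://www.viasat.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
# parse() accepts the file's lines directly, so no network fetch is needed here;
# a live crawler would call rp.set_url(...) and rp.read() instead.
rp.parse(ROBOTS_TXT.splitlines())

# With only "Allow: /" under "User-agent: *", every path is fetchable by any agent.
print(rp.can_fetch("ExampleBot", "https://viasat.com/any/page"))  # True
# The crawl-delay of 10 seconds applies to all agents via the wildcard group.
print(rp.crawl_delay("ExampleBot"))  # 10
```

Since the only rule group is the wildcard `*` with `Allow: /`, this robots.txt permits full crawling and only asks crawlers to pace themselves to one request every 10 seconds.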