dingowebworks.com
robots.txt

Robots Exclusion Standard data for dingowebworks.com

Resource Scan

Scan Details

Site Domain dingowebworks.com
Base Domain dingowebworks.com
Scan Status Ok
Last Scan 2024-09-16T08:47:36+00:00
Next Scan 2024-10-16T08:47:36+00:00

Last Scan

Scanned 2024-09-16T08:47:36+00:00
URL https://dingowebworks.com/robots.txt
Domain IPs 104.26.0.65, 104.26.1.65, 172.67.72.116, 2606:4700:20::681a:141, 2606:4700:20::681a:41, 2606:4700:20::ac43:4874
Response IP 172.67.72.116
Found Yes
Hash 7ef9ca5668105e619daedff0fe33502daf81f63434afa6788d77effa5048c476
SimHash 39b55d216555
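
The listed Hash is 64 hexadecimal digits, which suggests a SHA-256 digest of the fetched file; the report does not name the algorithm, so treat that as an assumption. A minimal Python sketch for recomputing it from the scanned URL:

    import hashlib
    import urllib.request

    # Fetch the scanned resource and hash the raw response body.
    # Assumption: the report's "Hash" field is SHA-256 of the body.
    url = "https://dingowebworks.com/robots.txt"
    with urllib.request.urlopen(url) as response:
        body = response.read()

    print(hashlib.sha256(body).hexdigest())

The digest will only match the value above if the file has not changed since the last scan.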

Groups

*

Rule        Path
Disallow    (empty)
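
The single group above applies to every crawler and carries one Disallow rule with an empty path, which disallows nothing. A minimal sketch of how that policy evaluates, using Python's standard urllib.robotparser; the rule lines are reconstructed from the group shown here, not copied from the fetched file:

    from urllib.robotparser import RobotFileParser

    # Rule lines reconstructed from the "*" group above (an assumption
    # based on this report, not a copy of the fetched file).
    lines = [
        "User-agent: *",
        "Disallow:",  # empty path: disallows nothing
    ]

    parser = RobotFileParser()
    parser.parse(lines)

    # Under this policy every path is crawlable for every agent.
    print(parser.can_fetch("*", "https://dingowebworks.com/"))        # True
    print(parser.can_fetch("*", "https://dingowebworks.com/admin/"))  # True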

Comments

  • ****************************************************************************
  • robots.txt
  • Robots, spiders, and search engines use this file to determine which content they should *not* crawl while indexing your website.
  • This system is called "The Robots Exclusion Standard."
  • You are strongly encouraged to use a robots.txt validator to check for valid syntax before any robots read it!
  • Examples:
  • Instruct all robots to stay out of the admin area:
  •   User-agent: *
  •   Disallow: /admin/
  • Prevent Google and MSN from indexing your images:
  •   User-agent: Googlebot
  •   Disallow: /images/
  •   User-agent: MSNBot
  •   Disallow: /images/
  • ****************************************************************************
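
The two example policies quoted in the comments can be checked the same way. A short sketch, again with urllib.robotparser, using example.com as a placeholder host:

    from urllib.robotparser import RobotFileParser

    # The example rules from the comment block above.
    example = [
        "User-agent: *",
        "Disallow: /admin/",
        "",
        "User-agent: Googlebot",
        "Disallow: /images/",
        "",
        "User-agent: MSNBot",
        "Disallow: /images/",
    ]

    parser = RobotFileParser()
    parser.parse(example)

    # The generic group keeps unnamed robots out of /admin/ ...
    print(parser.can_fetch("SomeBot", "https://example.com/admin/settings"))     # False
    # ... while Googlebot and MSNBot follow their own groups, which block only /images/.
    print(parser.can_fetch("Googlebot", "https://example.com/images/logo.png"))  # False
    print(parser.can_fetch("Googlebot", "https://example.com/blog/post"))        # True

Note that a crawler matching a named group follows only that group, so Googlebot in this example is not bound by the /admin/ rule from the "*" group.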