shockwave.com
robots.txt

Robots Exclusion Standard data for shockwave.com

Resource Scan

Scan Details

Site Domain shockwave.com
Base Domain shockwave.com
Scan Status Ok
Last Scan 2024-10-05T08:21:38+00:00
Next Scan 2024-10-12T08:21:38+00:00

Last Scan

Scanned 2024-10-05T08:21:38+00:00
URL https://shockwave.com/robots.txt
Redirect https://www.shockwave.com/robots.txt
Redirect Domain www.shockwave.com
Redirect Base shockwave.com
Domain IPs 104.26.4.191, 104.26.5.191, 172.67.69.138, 2606:4700:20::681a:4bf, 2606:4700:20::681a:5bf, 2606:4700:20::ac43:458a
Redirect IPs 104.26.4.191, 104.26.5.191, 172.67.69.138, 2606:4700:20::681a:4bf, 2606:4700:20::681a:5bf, 2606:4700:20::ac43:458a
Response IP 104.26.5.191
Found Yes
Hash 5ec0a02d435f658631b2680f1f948cbc9aff2391116f27dd879f8e9dfba1508e
SimHash 28129d18cf70
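
The Hash field looks like a 64-character hex digest, consistent with a SHA-256 of the response body; treating that as an assumption rather than documented behaviour, a re-fetch of the file can be checked against it with a short sketch like the one below (the value will differ if the file has changed since the scan date).

    import hashlib
    from urllib.request import urlopen

    # Assumption: the report's Hash field is the SHA-256 of the raw response body.
    REPORTED_HASH = "5ec0a02d435f658631b2680f1f948cbc9aff2391116f27dd879f8e9dfba1508e"

    with urlopen("https://shockwave.com/robots.txt", timeout=10) as resp:
        body = resp.read()
        print("Final URL:", resp.geturl())  # the scan shows a redirect to www.shockwave.com
        digest = hashlib.sha256(body).hexdigest()
        print("SHA-256:", digest)
        print("Matches report:", digest == REPORTED_HASH)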

Groups

*

Rule Path
Allow /images/*.gif
Allow /images/*.jpg
Allow /images/*.jpeg
Allow /images/*.png
Allow /images/*.webp
Allow /images/app/*.gif
Allow /images/app/*.jpg
Allow /images/app/*.jpeg
Allow /images/app/*.png
Allow /images/app/*.webp
Disallow /images/background.jpg
Disallow /images/
Disallow /fonts/
Disallow /scripts/
Disallow /account/
Disallow /account/register/
Disallow /account/password/
Disallow /account/reset/
Disallow /api/
Disallow /*_buildManifest.js$
Disallow /*_middlewareManifest.js$
Disallow /*_ssgManifest.js$
Disallow /*?*
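
The single group above applies to every user agent ("*"). Under the Robots Exclusion Protocol (RFC 9309), a crawler picks the most specific (longest) rule that matches the URL path, with Allow winning a tie; "*" matches any run of characters and a trailing "$" anchors the match at the end of the path. The sketch below is a hand-rolled matcher (not any particular library's API) that evaluates the rules exactly as listed; the example paths are hypothetical.

    import re

    # The "*" group from this report, as (rule, path-pattern) pairs.
    RULES = [
        ("allow", "/images/*.gif"), ("allow", "/images/*.jpg"),
        ("allow", "/images/*.jpeg"), ("allow", "/images/*.png"),
        ("allow", "/images/*.webp"), ("allow", "/images/app/*.gif"),
        ("allow", "/images/app/*.jpg"), ("allow", "/images/app/*.jpeg"),
        ("allow", "/images/app/*.png"), ("allow", "/images/app/*.webp"),
        ("disallow", "/images/background.jpg"), ("disallow", "/images/"),
        ("disallow", "/fonts/"), ("disallow", "/scripts/"),
        ("disallow", "/account/"), ("disallow", "/account/register/"),
        ("disallow", "/account/password/"), ("disallow", "/account/reset/"),
        ("disallow", "/api/"), ("disallow", "/*_buildManifest.js$"),
        ("disallow", "/*_middlewareManifest.js$"),
        ("disallow", "/*_ssgManifest.js$"), ("disallow", "/*?*"),
    ]

    def pattern_to_regex(pattern: str) -> re.Pattern:
        """Turn a robots.txt path pattern ('*' wildcard, trailing '$' anchor)
        into a regex anchored at the start of the URL path."""
        anchored = pattern.endswith("$")
        body = pattern.rstrip("$")
        regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
        return re.compile("^" + regex + ("$" if anchored else ""))

    def is_allowed(path: str) -> bool:
        """Longest-match evaluation: the most specific matching rule wins,
        and Allow beats Disallow when pattern lengths tie (RFC 9309)."""
        matches = [(len(p), rule) for rule, p in RULES
                   if pattern_to_regex(p).search(path)]
        if not matches:
            return True  # no rule matches: crawling is permitted by default
        matches.sort(key=lambda m: (m[0], m[1] == "allow"), reverse=True)
        return matches[0][1] == "allow"

    # Hypothetical example paths, not taken from the scan:
    for path in ("/images/logo.png",        # Allow /images/*.png outweighs Disallow /images/
                 "/images/background.jpg",  # the longer literal Disallow wins over /images/*.jpg
                 "/fonts/inter.woff2",      # blocked by Disallow /fonts/
                 "/games/chess?level=3"):   # blocked by Disallow /*?* (any query string)
        print(path, "->", "allow" if is_allowed(path) else "disallow")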

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts of your site by web crawlers and spiders run by sites like Yahoo! and Google. By telling these "robots" where not to go on your site, you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
    Used: http://example.com/robots.txt
    Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see: http://www.robotstxt.org/robotstxt.html
  • Images
  • Directories
  • Paths (clean URLs)
  • Next.JS Crawl Budget Performance Updates
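
As the comments above note, a crawler only honours the file at the root of the host, so the robots.txt URL is derived purely from the scheme and host of the page being fetched. A minimal sketch (the page path shown is hypothetical):

    from urllib.parse import urlsplit, urlunsplit

    def robots_url(page_url: str) -> str:
        """Derive the only robots.txt location honoured for a page:
        the root of that page's scheme and host."""
        parts = urlsplit(page_url)
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_url("https://www.shockwave.com/some/page"))
    # -> https://www.shockwave.com/robots.txt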