stadiumbuilds.io
robots.txt

Robots Exclusion Standard data for stadiumbuilds.io

Resource Scan

Scan Details

Site Domain stadiumbuilds.io
Base Domain stadiumbuilds.io
Scan Status Ok
Last Scan 2025-12-08T16:20:33+00:00
Next Scan 2025-12-15T16:20:33+00:00

Last Scan

Scanned 2025-12-08T16:20:33+00:00
URL https://stadiumbuilds.io/robots.txt
Domain IPs 13.215.239.219, 52.74.6.109
Response IP 52.74.6.109
Found Yes
Hash 254c06f2b7121b8723cb2081c1e2f0db86de8606d4fa74ddc792f1fa40fd4d05
SimHash dd9eda51ee2b

Groups

*

Rule Path
Allow /
Disallow /api/
Disallow /og/
Disallow /data/
Disallow /supabase/functions/
Disallow /.netlify/
Disallow /netlify/
Disallow /auth/
Disallow /login
Disallow /register
Disallow /reset-password
Disallow /verify-email
Disallow /dashboard/
Disallow /settings/
Disallow /account/
Disallow /analytics
Disallow /.env
Disallow /*.json$
Disallow /*.sql$
Disallow /scripts/
Disallow /test/
Disallow /.bolt/
Disallow /.github/
Disallow /.lighthouseci/
Disallow /backups/
Disallow /temp/
Disallow /.temp/
Disallow /crowdin-daily-sync-backup/
Disallow /documentation/
Disallow /docs/
Allow /manifest.json
Allow /ads.txt
Allow /sw.js

Other Records

Field Value
crawl-delay 1
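
The wildcard rules in this group (for example /*.json$ and /*.sql$) use the extended path syntax from RFC 9309: the longest matching pattern decides, and Allow wins over Disallow on ties, which is why the explicit Allow /manifest.json still permits the manifest even though /*.json$ blocks other JSON files. Below is a minimal Python sketch of that longest-match logic, using only a handful of the rules above; the sample paths are hypothetical.

import re

def pattern_to_regex(pattern):
    # Convert a robots.txt path pattern ("*" wildcard, optional "$" end
    # anchor) into a regular expression, per the RFC 9309 matching rules.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + regex + ("$" if anchored else ""))

# A few (allow?, pattern) pairs taken from the "*" group above.
RULES = [
    (True,  "/"),
    (False, "/api/"),
    (False, "/data/"),
    (False, "/*.json$"),
    (True,  "/manifest.json"),
]

def is_allowed(path):
    # Longest matching pattern wins; on equal length, Allow beats Disallow.
    matches = [(len(p), allow) for allow, p in RULES
               if pattern_to_regex(p).match(path)]
    if not matches:
        return True  # nothing matches: crawling is allowed
    matches.sort()   # sorts by length, then False < True, so Allow wins ties
    return matches[-1][1]

print(is_allowed("/manifest.json"))    # True: Allow /manifest.json is longest
print(is_allowed("/data/stats.json"))  # False: /*.json$ (and /data/) apply
print(is_allowed("/builds/example"))   # True: only "Allow /" matches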

googlebot

Rule Path
Allow /

Other Records

Field Value
crawl-delay 0

bingbot

Rule Path
Allow /

Other Records

Field Value
crawl-delay 1
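
A crawler that wants to honour these Crawl-delay records (1 second for the default group and bingbot, 0 for googlebot, which ignores the directive anyway) can read them with Python's standard urllib.robotparser. The sketch below is only an approximation: the standard-library parser uses prefix matching and first-match precedence, so it does not evaluate the wildcard rules above exactly, and "SomeCrawler" is a hypothetical agent with no group of its own, so it falls back to the "*" group.

import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://stadiumbuilds.io/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# No dedicated group for "SomeCrawler", so the "*" group applies: delay 1.
delay = rp.crawl_delay("SomeCrawler") or 0

for url in ("https://stadiumbuilds.io/", "https://stadiumbuilds.io/ads.txt"):
    if rp.can_fetch("SomeCrawler", url):
        pass           # a real crawler would issue the HTTP request here
    time.sleep(delay)  # wait out the advertised Crawl-delay between requests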

ahrefsbot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

gptbot

Rule Path
Allow /
Disallow /dashboard/
Disallow /api/
Disallow /auth/

ccbot

Rule Path
Allow /

chatgpt-user

Rule Path
Allow /
Disallow /dashboard/
Disallow /api/

claude-web

Rule Path
Allow /
Disallow /dashboard/
Disallow /api/
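
Which of these groups a given crawler obeys comes down to user-agent matching, not to reading every group: per RFC 9309 a crawler follows the group whose User-agent token best matches its own product token and falls back to "*" otherwise, so GPTBot is bound by its own group (no /dashboard/, /api/ or /auth/) rather than the default rules, and any bot without a named group inherits the "*" restrictions. A rough sketch of that selection, using the group names listed above; the matching here is a simplified case-insensitive substring check.

GROUPS = ["*", "googlebot", "bingbot", "ahrefsbot", "semrushbot", "dotbot",
          "mj12bot", "gptbot", "ccbot", "chatgpt-user", "claude-web"]

def select_group(product_token):
    # Pick the most specific (longest) group name contained in the crawler's
    # product token; otherwise fall back to the default "*" group.
    token = product_token.lower()
    candidates = [g for g in GROUPS if g != "*" and g in token]
    return max(candidates, key=len) if candidates else "*"

print(select_group("GPTBot"))      # gptbot
print(select_group("CCBot/2.0"))   # ccbot
print(select_group("SomeNewBot"))  # * (default group applies)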

Other Records

Field Value
sitemap https://stadiumbuilds.io/sitemap.xml
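
The single Sitemap record points every crawler at the same URL set. A small sketch of fetching it and listing its loc entries, assuming the file follows the standard sitemaps.org schema (the sitemap's actual contents are not part of this scan):

import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen("https://stadiumbuilds.io/sitemap.xml") as resp:
    root = ET.fromstring(resp.read())

# Works for a plain urlset; a sitemap index would list child sitemaps instead.
for loc in root.findall(".//sm:loc", NS):
    print(loc.text)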

Comments

  • Robots.txt for Stadium Builds
  • https://stadiumbuilds.io
  • Allow all crawlers by default
  • Block API endpoints and functions
  • Block authentication and account pages
  • Block user account areas
  • Block temporary and system files
  • Block backup and temporary directories
  • Block documentation (if you don't want it indexed)
  • Allow specific important files
  • Sitemap location
  • Crawl delay (optional - helps prevent server overload)
  • Specific bot rules
  • Block bad bots (optional security measure)
  • GPTBot (OpenAI's web crawler)
  • Common AI/ML crawlers