magicaiz.com
robots.txt

Robots Exclusion Standard data for magicaiz.com

Resource Scan

Scan Details

Site Domain magicaiz.com
Base Domain magicaiz.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Couldn't connect to server.
Last Scan 2025-12-05T13:36:40+00:00
Next Scan 2026-03-05T13:36:40+00:00

Last Successful Scan

Scanned 2025-08-07T15:18:05+00:00
URL https://magicaiz.com/robots.txt
Domain IPs 179.61.189.18, 2a02:4780:84:b523:9e65:e150:88b0:c6a8, 2a02:4780:84:f80:31a4:c9df:1993:2d79, 77.37.66.80
Response IP 93.127.187.239
Found Yes
Hash 0214f5ff66566d2f582a742a98ba17b67fe36c1fd717710e71d6c2778832edf8
SimHash 4f9adc73e565

Groups

*

Rule Path
Allow /
Disallow /admin-dashboard
Disallow /admin/
Disallow /admin/users
Disallow /dashboard
Disallow /account
Disallow /settings
Disallow /history
Disallow /subscription
Disallow /payment
Disallow /thankyou
Disallow /login
Disallow /register
Allow /
Allow /about
Allow /contact
Allow /pricing
Allow /faq
Allow /support
Allow /gallery
Allow /reviews
Allow /image-generation
Allow /text-to-speech
Allow /text-translation
Allow /ai-chat
Allow /image-to-video
Allow /image-upscaler
Allow /background-remover
Allow /background-music
Allow /music
Allow /video-footage
Allow /free-video-footage
Allow /privacy
Allow /terms
Allow /cookie-policy
Allow /refund-policy

Other Records

Field Value
crawl-delay 1

googlebot

Rule Path
Allow /

Other Records

Field Value
crawl-delay 1

bingbot

Rule Path
Allow /

Other Records

Field Value
crawl-delay 1

ahrefsbot

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

dotbot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

Other Records

Field Value
sitemap https://magicaiz.com/sitemap.xml
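The groups above can be exercised with Python's standard-library `urllib.robotparser`. This is a minimal sketch, not the live file (the last scan failed to connect): the robots.txt body is a hand-condensed subset of the scanned rules, and the Disallow lines are placed before the blanket `Allow: /` because `urllib.robotparser` applies the first matching rule, unlike the longest-match semantics used by major search engines.

```python
# Minimal sketch: test a condensed subset of the scanned rules with the
# stdlib parser. Note: urllib.robotparser uses first-match semantics, so
# the Disallow lines come before "Allow: /" here, even though the scanned
# file lists "Allow: /" first.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /dashboard
Disallow: /login
Allow: /
Crawl-delay: 1

User-agent: ahrefsbot
Disallow: /

Sitemap: https://magicaiz.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Generic crawlers may fetch public pages but not the admin area.
print(parser.can_fetch("MyCrawler", "https://magicaiz.com/pricing"))  # True
print(parser.can_fetch("MyCrawler", "https://magicaiz.com/admin/"))   # False

# ahrefsbot is blocked from the entire site.
print(parser.can_fetch("ahrefsbot", "https://magicaiz.com/"))         # False

# The sitemap record is exposed too (Python 3.8+).
print(parser.site_maps())  # ['https://magicaiz.com/sitemap.xml']
```

The rule reordering matters: with `Allow: /` listed first, `urllib.robotparser` would match it against every path and never reach the Disallow lines.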

Comments

  • Artify Dreamscape Studio - Robots.txt
  • This file tells search engine crawlers which pages they can and cannot access
  • ========================================
  • DISALLOWED PAGES (Private/Admin Areas)
  • ========================================
  • Admin pages
  • User account pages (private)
  • Authentication pages
  • ========================================
  • ALLOWED PAGES (Public Content)
  • ========================================
  • Main pages
  • AI Tools (public features)
  • Media content
  • Legal pages
  • ========================================
  • SITEMAP & CRAWLING SETTINGS
  • ========================================
  • Sitemap location
  • Crawl delay (be respectful to server)
  • ========================================
  • SPECIFIC BOT RULES (Optional)
  • ========================================
  • Googlebot specific rules
  • Bingbot specific rules
  • ========================================
  • BLOCK HARMFUL BOTS
  • ========================================
  • Block common spam bots
  • ========================================
  • ADDITIONAL NOTES
  • ========================================
  • - This robots.txt allows search engines to crawl all public content
  • - Private user areas are protected from indexing
  • - Sitemap is provided for efficient crawling
  • - Crawl delay prevents server overload
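The 1-second crawl delay recorded above can be honored in a crawler loop; a hedged sketch using the stdlib parser, where `fetch_page` and the `MyCrawler` agent name are hypothetical placeholders rather than anything from the scanned site:

```python
# Sketch of honoring a Crawl-delay: 1 record while crawling allowed pages.
# fetch_page() is a hypothetical stand-in for a real HTTP request.
import time
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /admin/",
    "Allow: /",
    "Crawl-delay: 1",
])

# Seconds to wait between requests; None means no delay was declared.
delay = parser.crawl_delay("MyCrawler") or 0

def polite_crawl(urls, fetch_page=print):
    """Fetch only allowed URLs, sleeping `delay` seconds after each request."""
    for url in urls:
        if parser.can_fetch("MyCrawler", url):
            fetch_page(url)
            time.sleep(delay)

polite_crawl([
    "https://magicaiz.com/pricing",      # fetched
    "https://magicaiz.com/admin/users",  # skipped: disallowed
])
```

Sleeping after every request keeps the crawler within the declared rate regardless of how fast individual responses arrive.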