nxt.do
robots.txt

Robots Exclusion Standard data for nxt.do

Resource Scan

Scan Details

Site Domain nxt.do
Base Domain nxt.do
Scan Status Ok
Last Scan 2025-12-02T10:04:51+00:00
Next Scan 2026-01-01T10:04:51+00:00

Last Scan

Scanned 2025-12-02T10:04:51+00:00
URL https://nxt.do/robots.txt
Domain IPs 15.197.129.158, 75.2.43.161, 76.223.11.49, 99.83.217.1
Response IP 76.223.11.49
Found Yes
Hash 52a434cbd9fb2c460b617e9587226a6180693ce6fd84cf32fe0ef6bf140fce1a
SimHash 64269aaae543
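
The Hash value is 64 hexadecimal characters, consistent with a SHA-256 digest. As a rough illustration (assuming, though the report does not confirm it, that the field is a plain SHA-256 hex digest of the raw response body), the value could be reproduced with a short Python sketch:

  # Hedged sketch: fetch robots.txt and hash the raw response body.
  # Assumption (not confirmed by this report): the "Hash" field is a
  # plain SHA-256 hex digest of the bytes returned by the server.
  import hashlib
  import urllib.request

  with urllib.request.urlopen("https://nxt.do/robots.txt") as resp:
      body = resp.read()

  print(hashlib.sha256(body).hexdigest())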

Groups

*

Rule Path
Allow /
Allow /en/
Allow /es/
Allow /fr/
Allow /de/
Allow /it/
Allow /pt/
Allow /pt-BR/
Allow /nl/
Allow /pl/
Allow /cs/
Allow /hu/
Allow /ro/
Allow /ru/
Allow /uk/
Allow /ar/
Allow /he/
Allow /tr/
Allow /hi/
Allow /th/
Allow /vi/
Allow /id/
Allow /ms/
Allow /ko/
Allow /ja/
Allow /zh-CN/
Allow /sv/
Allow /no/
Allow /da/
Allow /fi/
Allow /el/
Allow /fa/
Allow /assets/
Allow /*.css
Allow /*.js
Allow /*.png
Allow /*.jpg
Allow /*.jpeg
Allow /*.gif
Allow /*.svg
Allow /*.webp
Allow /*.ico
Disallow /admin/
Disallow /admin/*
Disallow /api/
Disallow /api/*
Disallow /rails/
Disallow /rails/*
Disallow /purchases/
Disallow /purchases/*
Disallow /up
Disallow /*/account-deletion
Disallow /account-deletion
Disallow /400
Disallow /404
Disallow /406
Disallow /422
Disallow /500
Disallow /tmp/
Disallow /cache/
Disallow /.well-known/

Other Records

Field Value
crawl-delay 1
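
The crawl-delay of 1 second applies to the wildcard group. A minimal sketch of a polite fetch loop that honors it (the one-second value is taken from the record above; the URL list is hypothetical and only for illustration):

  # Hedged sketch: space out requests by the crawl-delay from the "*" group.
  import time
  import urllib.request

  CRAWL_DELAY = 1.0  # seconds, per the crawl-delay record above

  # Hypothetical pages to fetch; not taken from the scan report.
  urls = ["https://nxt.do/en/", "https://nxt.do/es/"]

  for url in urls:
      with urllib.request.urlopen(url) as resp:
          print(url, resp.status)
      time.sleep(CRAWL_DELAY)  # wait before issuing the next request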

googlebot

Rule Path
Allow /

bingbot

Rule Path
Allow /

slurp

Rule Path
Allow /

Other Records

Field Value
sitemap https://nxt.do/sitemap.xml
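
Taken together, the groups above can be evaluated with Python's standard urllib.robotparser. A minimal sketch follows; results depend on the live file, and note that the stock parser does not implement Googlebot-style wildcard matching, so patterns such as "Allow /*.css" are not interpreted the way major crawlers interpret them:

  # Hedged sketch: check the scanned rules with the standard-library parser.
  # site_maps() requires Python 3.8+.
  import urllib.robotparser

  rp = urllib.robotparser.RobotFileParser("https://nxt.do/robots.txt")
  rp.read()

  print(rp.can_fetch("*", "https://nxt.do/en/"))     # expected True per the rules above
  print(rp.can_fetch("*", "https://nxt.do/admin/"))  # expected False per the rules above
  print(rp.crawl_delay("*"))                         # expected 1 per the crawl-delay record
  print(rp.site_maps())                              # expected ['https://nxt.do/sitemap.xml']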

Comments

  • Robots.txt for nxt
  • See https://www.robotstxt.org/robotstxt.html for documentation
  • Allow all search engines to crawl public content
  • Allow access to main public pages and localized content
  • Allow access to static assets for proper page rendering
  • Block admin areas
  • Block API endpoints (not meant for search indexing)
  • Block Rails-specific paths
  • Block transaction and purchase endpoints
  • Block health check endpoint
  • Block account deletion page (sensitive/private)
  • Block error pages (they shouldn't be indexed)
  • Block any temporary or cache files
  • Crawl delay (be respectful to server resources)
  • Sitemap location (update this URL to your actual domain)
  • Specific rules for major search engines