belrtl.be
robots.txt

Robots Exclusion Standard data for belrtl.be

Resource Scan

Scan Details

Site Domain belrtl.be
Base Domain belrtl.be
Scan Status Ok
Last Scan 2024-11-10T21:09:05+00:00
Next Scan 2024-11-17T21:09:05+00:00

Last Scan

Scanned 2024-11-10T21:09:05+00:00
URL https://belrtl.be/robots.txt
Redirect https://www.belrtl.be/robots.txt
Redirect Domain www.belrtl.be
Redirect Base belrtl.be
Domain IPs 2.57.173.54
Redirect IPs 2.57.173.54
Response IP 2.57.173.54
Found Yes
Hash 88ce67637f2cbeedad65581d4aded56e932f359fe1eae2c8ba986921115d2dce
SimHash a8929d13c564
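
The hash above is 64 hex characters, consistent with a SHA-256 digest of the fetched file. Below is a minimal sketch of reproducing the fetch, the redirect to www.belrtl.be, and the digest; the exact bytes the scanner hashed and the User-Agent string used here are assumptions.

    # Sketch: refetch https://belrtl.be/robots.txt and hash the body.
    # Assumption: the recorded hash is SHA-256 over the response body as served.
    import hashlib
    import urllib.request

    req = urllib.request.Request(
        "https://belrtl.be/robots.txt",
        headers={"User-Agent": "robots-scan-example/1.0"},  # placeholder UA
    )
    with urllib.request.urlopen(req) as resp:  # urllib follows the 3xx redirect
        final_url = resp.geturl()              # expected: https://www.belrtl.be/robots.txt
        body = resp.read()

    print("Final URL:", final_url)
    print("SHA-256:", hashlib.sha256(body).hexdigest())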

Groups

mediapartners-google
googlebot
googlebot-image
googlebot-mobile
googlebot-news
googlebot-video
adsbot-google
googlebot_nauxeo
bingbot
twitterbot
applebot
bingbot
facebot
siteauditbot
screaming frog seo spider
publication-access-for-facebook
facebookexternalhit
flipboard
flipboardproxy
upday

Rule Path
Allow /*.css$
Allow /*.css?
Allow /*.js$
Allow /*.js?
Allow /*.gif
Allow /*.jpg
Allow /*.jpeg
Allow /*.png
Allow /*.webp
Disallow /cws/
Disallow /admin/
Disallow /api/
Disallow */api
Disallow /README.md
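
The group above relies on the * and $ wildcard extensions (RFC 9309), which the standard-library urllib.robotparser does not evaluate, so the sketch below shows a hand-rolled matcher instead. The rule subset and test paths are illustrative assumptions, and precedence is simplified to longest matching pattern, with Allow winning ties.

    import re

    # Subset of the Allow/Disallow rules listed above.
    RULES = [
        ("Allow", "/*.css$"),
        ("Allow", "/*.js$"),
        ("Allow", "/*.png"),
        ("Disallow", "/admin/"),
        ("Disallow", "/api/"),
        ("Disallow", "*/api"),
        ("Disallow", "/README.md"),
    ]

    def to_regex(pattern):
        # '*' matches any run of characters; a trailing '$' anchors the end.
        anchored = pattern.endswith("$")
        body = ".*".join(re.escape(p) for p in pattern.rstrip("$").split("*"))
        return re.compile("^" + body + ("$" if anchored else ""))

    def is_allowed(path):
        # Longest matching pattern wins; Allow wins a length tie.
        best_rule, best_len = "Allow", -1
        for rule, pattern in RULES:
            if to_regex(pattern).match(path):
                if len(pattern) > best_len or (len(pattern) == best_len and rule == "Allow"):
                    best_rule, best_len = rule, len(pattern)
        return best_rule == "Allow"

    print(is_allowed("/assets/site.css"))  # True  (matches '/*.css$')
    print(is_allowed("/admin/login"))      # False (matches '/admin/')
    print(is_allowed("/v1/api"))           # False (matches '*/api')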

gptbot

Rule Path
Disallow /
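
Because the gptbot group disallows everything and uses no wildcards, the standard-library parser is sufficient to check it; a short sketch follows (the example URLs are assumptions).

    # Sketch: the gptbot group above blocks every path.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.parse([
        "User-agent: gptbot",
        "Disallow: /",
    ])
    print(rp.can_fetch("gptbot", "https://www.belrtl.be/"))          # False
    print(rp.can_fetch("gptbot", "https://www.belrtl.be/any/page"))  # False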

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • Allowed search engines directives
  • Crawl-delay: 10
  • CSS, JS, Images
  • Directories
  • Disallowed search engines
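
As the comments note, only a root-level robots.txt is honoured. A minimal sketch of deriving that root URL from any page URL is shown below; the helper name is hypothetical.

    # Sketch: robots.txt is only read from the root of the host.
    from urllib.parse import urlsplit, urlunsplit

    def robots_url(page_url):
        parts = urlsplit(page_url)
        return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

    print(robots_url("http://example.com/site/robots.txt"))  # http://example.com/robots.txt
    print(robots_url("https://www.belrtl.be/news/article"))  # https://www.belrtl.be/robots.txt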