wiki.bmc.com
robots.txt

Robots Exclusion Standard data for wiki.bmc.com

Resource Scan

Scan Details

Site Domain wiki.bmc.com
Base Domain bmc.com
Scan Status Ok
Last Scan 2025-12-14T05:07:41+00:00
Next Scan 2026-01-13T05:07:41+00:00

Last Scan

Scanned 2025-12-14T05:07:41+00:00
URL https://wiki.bmc.com/robots.txt
Domain IPs 178.32.198.83, 92.222.135.245
Response IP 178.32.198.83
Found Yes
Hash f1fa1772c21f7c07c587c91014230b4b9ccbc0eca6f0c6a5191973d4dc976876
SimHash 717649308571

Groups

*

Rule Path
Disallow */viewattachrev/
Disallow */viewrev/
Disallow */pdf/
Disallow */tex/
Disallow */edit/
Disallow */create/
Disallow */inline/
Disallow */preview/
Disallow */save/
Disallow */saveandcontinue/
Disallow */rollback/
Disallow */deleteversions/
Disallow */cancel/
Disallow */delete/
Disallow */deletespace/
Disallow */undelete/
Disallow */reset/
Disallow */register/
Disallow */propupdate/
Disallow */propadd/
Disallow */propdisable/
Disallow */propenable/
Disallow */propdelete/
Disallow */objectadd/
Disallow */commentadd/
Disallow */commentsave/
Disallow */objectsync/
Disallow */objectremove/
Disallow */attach/
Disallow */upload/
Disallow */download/
Disallow */temp/
Disallow */downloadrev/
Disallow */dot/
Disallow */svg/
Disallow */delattachment/
Disallow */skin/
Disallow */jsx/
Disallow */ssx/
Disallow */login/
Disallow */loginsubmit/
Disallow */loginerror/
Disallow */logout/
Disallow */charting/
Disallow */lock/
Disallow */redirect/
Disallow */admin/
Disallow */export/
Disallow */import/
Disallow */get/
Disallow */distribution/
Disallow */imagecaptcha/
Disallow */unknown/
Disallow */webjars/
Disallow */resources/
Disallow */Sandbox/
Disallow */Admin/
Disallow */Stats/
Disallow */Panels/
Disallow */asyncrenderer/uix/*Panel*
Disallow */Main/Search
Disallow */XWiki/XWikiGuest
Disallow */XWiki/superadmin
Disallow /*?*xpage=*
Disallow */view/*?*viewer=*
Disallow */rest/
Allow */*/download/*.png$
Allow */*/download/*.jpg$
Allow */*/download/*.jpeg$
Allow */*/download/*.gif$

googlebot

Rule Path
Allow */jsx/
Allow */get/*?*xpage=plain*
Allow */webjars/
Allow */ssx/
Allow */charting/
Allow */dot/
Allow */svg/
Allow */download/*.png
Allow */download/*.jpg
Allow */download/*.jpeg
Allow */download/*.gif
Allow */download/*.svg
Allow */skin/
Allow */resources/
Disallow */asyncrenderer/uix/*Panel*
Disallow */jsx/*Panel*
Disallow */ssx/*Panel*
Disallow */viewattachrev/
Disallow */viewrev/
Disallow */pdf/
Disallow */tex/
Disallow */edit/
Disallow */create/
Disallow */inline/
Disallow */preview/
Disallow */save/
Disallow */saveandcontinue/
Disallow */rollback/
Disallow */deleteversions/
Disallow */cancel/
Disallow */delete/
Disallow */deletespace/
Disallow */undelete/
Disallow */reset/
Disallow */register/
Disallow */propupdate/
Disallow */propadd/
Disallow */propdisable/
Disallow */propenable/
Disallow */propdelete/
Disallow */objectadd/
Disallow */commentadd/
Disallow */commentsave/
Disallow */objectsync/
Disallow */objectremove/
Disallow */attach/
Disallow */upload/
Disallow */download/
Disallow */temp/
Disallow */downloadrev/
Disallow */delattachment/
Disallow */login/
Disallow */loginsubmit/
Disallow */loginerror/
Disallow */logout/
Disallow */lock/
Disallow */redirect/
Disallow */admin/
Disallow */export/
Disallow */import/
Disallow */get/
Disallow */distribution/
Disallow */imagecaptcha/
Disallow */unknown/
Disallow */Sandbox/
Disallow */Admin/
Disallow */Stats/
Disallow */Panels/
Disallow */Main/Search
Disallow */XWiki/XWikiGuest
Disallow */XWiki/superadmin
Disallow /*?*xpage=*
Disallow /*?*viewer=*
Disallow */rest/
Allow */*/download/*.png$
Allow */*/download/*.jpg$
Allow */*/download/*.jpeg$
Allow */*/download/*.gif$

gptbot

Rule Path
Disallow /

chatgpt-user

Rule Path
Disallow /

google-extended

Rule Path
Disallow /

perplexitybot

Rule Path
Disallow /

anthropic-ai

Rule Path
Disallow /

claude-web

Rule Path
Disallow /

claudebot

Rule Path
Disallow /

Comments

  • By default, all web crawlers are denied access to non-view actions and UI resources (images, js, css)
  • Well known application (non-content) locations.
  • We're not interested in rendering and indexing panels.
  • XWiki virtual users that do not have profile pages. Avoid unnecessary 404 requests/errors.
  • Avoid crawling unnecessary UI elements that are not relevant for indexing and can even cause loops (like pdfoptions, etc.)
  • Index only the main page content; all other viewers are not relevant for indexing.
  • Don't index the REST API.
  • For images uploaded as attachments inside wiki pages
  • Googlebot uses a headless browser to fully render a page before indexing, so the UI resources are relevant and actually needed.
  • JS
  • CSS
  • Images
  • A bit of everything
  • We're not interested in rendering and indexing panels.
  • All other rules stay the same for Googlebot. They must be repeated explicitly, since a specific user-agent group does not inherit from the generic rules and would otherwise default to Allow.
  • Well known application (non-content) locations.
  • XWiki virtual users that do not have profile pages. Avoid unnecessary 404 requests/errors.
  • Avoid crawling unnecessary UI elements that are not relevant for indexing and can even cause loops (like pdfoptions, etc.)
  • Index only the main page content; all other viewers are not relevant for indexing.
  • Don't index the REST API.
  • For images uploaded as attachments inside wiki pages
  • Block ChatGPT
  • Block Google AI
  • Block PerplexityBot
  • Block Anthropic AI
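The comment about Googlebot not inheriting from the generic rules reflects how robots.txt group selection works: a crawler obeys exactly one group, the one whose user-agent line best matches it, falling back to `*` only when nothing matches; groups are never merged. A minimal sketch of that selection (hypothetical helper, using simple substring matching where RFC 9309 specifies case-insensitive product-token matching):

```python
def select_group(groups, user_agent):
    """Pick the single rule group a crawler should obey.

    groups: dict mapping a user-agent token (or "*") to its rule list.
    The longest matching named token wins; otherwise fall back to "*".
    Groups do not merge, which is why the googlebot group above
    repeats the generic Disallow rules.
    """
    ua = user_agent.lower()
    matches = [name for name in groups if name != "*" and name in ua]
    if matches:
        return groups[max(matches, key=len)]
    return groups.get("*", [])

groups = {
    "*": [("Disallow", "*/edit/")],
    "googlebot": [("Allow", "*/jsx/"), ("Disallow", "*/edit/")],
    "gptbot": [("Disallow", "/")],
}

select_group(groups, "GPTBot/1.0")   # -> the gptbot group: blocked everywhere
select_group(groups, "curl/8.0")     # -> the "*" group
```

This is why the AI-crawler groups (gptbot, claudebot, etc.) need only a single `Disallow /`: once selected, that one-line group is the entire policy for the bot.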
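The Allow/Disallow pairs above (e.g. `Allow */*/download/*.png$` alongside `Disallow */download/`) rely on Google-style precedence: the most specific (longest) matching rule wins, with Allow winning ties, and `*`/`$` acting as wildcard and end anchor. A minimal sketch of that evaluation, with hypothetical helper names (note that Python's stdlib `urllib.robotparser` does not implement these wildcards):

```python
import re

def rule_to_regex(rule_path):
    # Translate a robots.txt rule path into a regex:
    # '*' matches any character sequence, a trailing '$' anchors the end.
    anchored = rule_path.endswith("$")
    if anchored:
        rule_path = rule_path[:-1]
    body = "".join(".*" if ch == "*" else re.escape(ch) for ch in rule_path)
    return re.compile("^" + body + ("$" if anchored else ""))

def is_allowed(rules, path):
    # rules: list of ("Allow" | "Disallow", rule_path) pairs.
    # Longest matching rule wins; Allow wins a tie; no match means allowed.
    best_verb, best_len = "Allow", -1
    for verb, rule in rules:
        if rule_to_regex(rule).match(path):
            if len(rule) > best_len or (len(rule) == best_len and verb == "Allow"):
                best_verb, best_len = verb, len(rule)
    return best_verb == "Allow"

rules = [
    ("Disallow", "*/edit/"),
    ("Disallow", "*/download/"),
    ("Allow", "*/*/download/*.png$"),
]

is_allowed(rules, "/bin/edit/Main/WebHome")               # False
is_allowed(rules, "/bin/download/Main/WebHome/logo.png")  # True: Allow is longer
is_allowed(rules, "/bin/download/Main/WebHome/doc.pdf")   # False
```

Under these semantics the `.png`/`.jpg`/`.jpeg`/`.gif` Allow rules carve image attachments out of the broader `Disallow */download/`, exactly as the comment about attachment images intends.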