| Readiness signals | Demand signals | Red flags |
| --- | --- | --- |
| Your knowledge sources are real systems of record (not scattered PDFs and tribal Slack threads) with named owners. | Teams are burning time re-answering the same questions (runbooks, policies, system behavior, account history). | Your docs are stale, politically disputed, or lack owners, so retrieval just scales confusion. |
| You can map and enforce access controls at retrieval time (role/team/project ACLs), not just in the UI. | Search returns documents, but the real need is decision-ready synthesis with citations. | You cannot enforce access controls on retrieved content (risk of leaking private or regulated material). |
| Users need answers with provenance (citations/quotes) so they can verify and correct quickly. | You can start with one domain (e.g., incident response, support policy, finance close) and expand via eval gates. | You need legal-grade guarantees on every answer but are unwilling to invest in human review lanes. |
| You can collect query logs and outcomes (clicked citations, escalations, "answer was wrong") to drive evals. | Your organization needs traceability: who answered what, with which sources, under which policy. | You want a "chatbot" for optics rather than an operational system with eval gates and telemetry. |
| You can build and maintain ingestion (connectors, chunking, dedupe, freshness) as a product, not a one-off. | You can operate a weekly cadence for evals, content fixes, and retrieval/policy updates. | You cannot budget ongoing maintenance (ingestion, evals, governance); RAG quality decays without it. |
| Security can approve a threat model for prompt injection, data leakage, and tool-use boundaries (OWASP-aligned). | | |
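Two of the readiness signals above, enforcing ACLs at retrieval time and returning answers with provenance, can be sketched together. This is a minimal illustration, not a reference implementation: the `Chunk`, `retrieve`, and role names are hypothetical, and the key point is simply that filtering happens before any content reaches the prompt, not in the UI layer.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A retrieved passage with the ACL attached at ingestion time."""
    doc_id: str
    text: str
    source_url: str
    allowed_roles: frozenset  # roles permitted to see this chunk

@dataclass
class CitedPassage:
    """What the synthesis step receives: text plus a verifiable citation."""
    text: str
    citation: str

def retrieve(query_hits: list[Chunk], user_roles: set, k: int = 3) -> list[CitedPassage]:
    # Enforce access control at retrieval time: drop any chunk the user
    # is not entitled to BEFORE it can enter the model's context window.
    visible = [c for c in query_hits if c.allowed_roles & user_roles]
    # Attach provenance so the user can click through and verify the answer.
    return [
        CitedPassage(text=c.text, citation=f"[{c.doc_id}]({c.source_url})")
        for c in visible[:k]
    ]

# Example: an SRE sees the runbook chunk; the finance-only chunk is filtered out.
hits = [
    Chunk("runbook-7", "Restart the ingest worker via systemctl.",
          "https://wiki.example.com/runbook-7", frozenset({"sre"})),
    Chunk("close-q3", "Q3 close checklist and sign-off order.",
          "https://wiki.example.com/close-q3", frozenset({"finance"})),
]
passages = retrieve(hits, user_roles={"sre"})
```

The same `CitedPassage` records can also feed the telemetry signals in the table (clicked citations, "answer was wrong"), since each answer is traceable to the doc IDs it was built from.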