Your RAG pilot is a data-exfiltration pattern with a nicer UI
Every federal and healthcare client we talk to is running at least one internal LLM pilot. Most of them describe it the same way: "just a productivity tool, it's read-only, the data never leaves our tenant." Three questions later, the control gaps are the same ones we were closing on file-share permissions a decade ago — except now there is a chat window in front of them, and the chat window is very good at negotiating for data it was never supposed to see.
We are not anti-AI. Two of the programmes we support are shipping retrieval-augmented generation into production this year. We are against pretending the governance problem is new. It is not. Here are the six patterns we see most often, and the NIST AI Risk Management Framework and NIST 800-53 Rev 5 controls that close them.
1. The index inherits no access model
The pattern. Someone copies a SharePoint library, a OneDrive folder, or a shared drive into a vector index. The index has no ACL. The chatbot now answers every employee's questions from every document, including the HR investigations folder and the unredacted contract drive.
The fix. Treat the index as a system of record. Mirror source ACLs at ingest. Re-evaluate permissions at query time, not at crawl time. If the source permission changes, the embedding is invalidated within 24 hours.
Maps to: NIST AI RMF MAP-2.3, MANAGE-2.1 · NIST 800-53 AC-3, AC-4 · HIPAA §164.312(a)(1)
2. Prompt logs are the new access logs — and nobody is reading them
Every prompt is a query. Every query reveals intent. The prompt log is now the most sensitive audit artifact in the environment, and it is usually being written to a blob store with retention set to "default."
The fix. Prompt and completion logs go to the same SIEM as identity and endpoint telemetry. Name an analyst. Write three detections: data-class exfiltration attempts, privilege-escalation phrasing, and novel-source grounding. Review a sample weekly.
Maps to: NIST AI RMF MEASURE-2.7 · NIST 800-53 AU-2, AU-6, SI-4
3. Model and tool access are not in the identity inventory
The pilot connects to a third-party model, an orchestration framework, and maybe three retrieval tools. Half of those integrations are running on service accounts provisioned outside the identity platform — because provisioning them properly was "slowing down the pilot."
The fix. Every non-human identity in the AI stack sits in the same IdP as human accounts. MFA-gated where applicable, rotated on a schedule, reviewed quarterly.
Maps to: NIST AI RMF MANAGE-3.1 · NIST 800-53 IA-2, IA-5, AC-2
4. Output is treated as trusted code
A generated SQL query, Terraform plan, or email draft is pasted into a production system without a human sign-off because "the model is usually right." One wrong IAM policy later, everyone remembers "usually" is not an assurance posture.
The fix. Classify every AI-touched workflow by blast radius. For anything that writes, a human approval gate with diff review. For anything read-only, an observability hook that flags anomalous result volumes.
Maps to: NIST AI RMF GOVERN-1.3, MANAGE-2.3 · NIST 800-53 CM-3, SI-10
5. The model bill of materials is missing
When the next model vulnerability drops — and one will — the first question your regulator and your agency sponsor will ask is "what models do you run, where, and at which versions." Most pilots cannot answer that in a week.
The fix. Maintain a Model BOM with provider, version, fine-tunes, data lineage, and last risk review. Tie it to your change-management process. The format is less important than the discipline.
Maps to: NIST AI RMF MAP-4.1 · NIST 800-53 CM-8, SR-4
6. "Red-teaming" is a single vendor demo, not a cadence
One AI red-team engagement before launch does not constitute ongoing assurance. Models drift. Guardrails erode. New jailbreak techniques publish weekly.
The fix. A continuous red-team cadence against the deployed system — not the model in isolation. Quarterly at minimum for any system touching regulated data, monthly for public-facing deployments. Document the corpus of attacks and the responses.
Maps to: NIST AI RMF MEASURE-2.6, MEASURE-2.7 · NIST 800-53 CA-8, RA-5
The pattern across all six: the controls are not new. The surface area is. Access models, audit logging, identity governance, change control, asset inventory, and adversarial testing — every one of these is already in NIST 800-53. The NIST AI RMF is the mapping layer, not a replacement.
If the AI governance plan in your programme does not reference 800-53 controls that your team already operates, it is not a governance plan. It is a press release.
Next step. A 30-minute AI security scoping call with a senior practitioner will give you the two controls most likely to fail your first audit cycle, mapped to the evidence your assessor will expect. No proposal unless you ask.