
NIST AI RMF GOVERN Function Implementation for Compliance Teams


Jared Clark

April 12, 2026 · JD, RAC, ISO/GMP Expert

There's a pattern I see consistently with regulated organizations that are trying to stand up an AI risk program. They download the NIST AI RMF, look at the four functions — GOVERN, MAP, MEASURE, MANAGE — and head straight for MAP. They want to get their AI systems cataloged. They want risk assessments running. They want something to show in an audit binder. GOVERN gets treated as background reading, something you come back to once the program is running.

The problem is that MAP, MEASURE, and MANAGE cannot do meaningful work without GOVERN in place. If you don't have a defined risk tolerance, your risk assessments have no benchmark to measure against. If you don't have clear accountability structures, your incident response has no clear owner. If you don't have policy, every AI decision is improvised. In regulated industries — pharma, medical devices, financial services, healthcare — the stakes of that improvisation are not theoretical. Regulators are increasingly referencing NIST AI RMF in enforcement guidance, and the GOVERN function is where that scrutiny lands first.

This article is for compliance teams that are ready to build GOVERN properly — not as a checkbox, but as the organizational container that makes everything else work.


What the GOVERN Function Actually Is

GOVERN is what NIST calls a "cross-cutting" function, which means it applies to every AI system in your organization, not just the high-risk ones. The other three functions — MAP, MEASURE, MANAGE — are applied system by system, deployment by deployment. GOVERN is the organizational foundation underneath all of them. You build it once, and every AI system you deploy sits on top of it.

The framework organizes GOVERN into six core categories:

  • GOVERN 1 — Policies, processes, procedures, and practices for mapping, measuring, and managing AI risk (legal and regulatory requirements, risk tolerance, trustworthy AI characteristics, system inventory, decommissioning)
  • GOVERN 2 — Accountability structures (roles, responsibilities, and training so the right teams and individuals are empowered to manage AI risk)
  • GOVERN 3 — Workforce diversity, equity, inclusion, and accessibility in AI risk management processes
  • GOVERN 4 — Organizational culture around AI risk (how risk is considered, communicated, and documented throughout the lifecycle)
  • GOVERN 5 — Engagement with relevant AI actors (processes for collecting and acting on feedback from those affected by AI systems)
  • GOVERN 6 — Policies and processes for third-party AI risk management (vendor software, data, and supply chain governance)

One addition worth noting: the Generative AI Profile (NIST AI 600-1, published in July 2024) layers generative-AI-specific guidance onto the framework, including decommissioning considerations under GOVERN 1.7. Organizations running GenAI applications need that subcategory in scope. AI RMF 1.0 treats decommissioning only generically, and it predates widespread enterprise GenAI adoption, so it does not speak to the specific risks of retiring a large language model or an embedded AI tool.

The key distinction between GOVERN and the other functions is not complexity — it's sequence. GOVERN creates the container. MAP, MEASURE, and MANAGE fill it. If the container doesn't exist, outputs from the other three functions have nowhere to land. Risk assessments without an approved risk tolerance are just documents. Incident reports without a defined escalation path are just records. They don't drive decisions, because no one has defined whose decision to make.


Why GOVERN Has to Come First

The compounding problem with skipping GOVERN is that the damage isn't immediately obvious. Your team runs AI risk assessments. You document systems in a registry. You develop monitoring procedures. Everything looks like progress. Then something goes wrong — a model produces a biased output, a vendor AI system behaves unexpectedly, an audit surfaces an undocumented use case — and the governance gap becomes impossible to ignore. Who owns this? What's our risk tolerance? What policy covers this? No one can answer cleanly.

Without a defined risk tolerance, risk assessments are measurements without a scale. Saying a system poses "high" risk means nothing if the organization hasn't decided what level of risk is acceptable for which types of decisions. A pharmaceutical company deploying AI for adverse event detection has a fundamentally different risk tolerance than a marketing team deploying AI for email personalization. GOVERN 1.3 requires you to articulate that distinction in writing, with executive sign-off. Until that document exists, every risk score your team produces is provisional.

Without clear accountability, AI incidents fall into organizational gaps. In my experience, the most common response to an AI-related incident in organizations without GOVERN in place is a meeting that produces confusion about who should have been watching this. That meeting is expensive. It's expensive in time, in regulatory exposure if a reportable event is delayed, and in trust — both internal and external.

Without policy, your team improvises. Every time an employee uses an AI tool for a new purpose, every time a vendor deploys an AI feature in a platform you use, every time a developer proposes a new model, the answer depends on whoever is in the room. That's not governance. It's consensus-by-default, and it doesn't hold up under scrutiny.

If each downstream function — MAP, MEASURE, MANAGE — is 90% effective on its own, but the organization has no container to absorb and act on its outputs, the system still fails. GOVERN is not overhead. It's what makes the investment in everything else recoverable.


Implementation Phases for Compliance Teams

What I've found works — and what I've refined across enough regulated-industry implementations to trust — is a phased build that runs approximately 16 weeks before reaching a stable operational state. That's not a long time. It's one quarter of sustained attention from the right people.

Phase 1 — Executive Engagement (Week 1)

GOVERN cannot be built from the bottom up. If the executive team hasn't formally engaged with AI risk, everything your compliance team produces will sit in a folder rather than drive decisions. Phase 1 is about getting the right people in a room for a focused conversation — roughly 90 minutes — and walking out with two things: a formal statement of AI risk tolerance, and an executive sponsor who owns accountability for the governance program.

The risk tolerance statement doesn't need to be detailed at this stage. It needs to answer the question: what categories of AI-related risk are we unwilling to accept, and at what threshold do AI systems require executive-level approval before deployment? That statement becomes the anchor for every risk assessment your team runs afterward.

Frame this conversation in business terms, not compliance terms. Executives respond to questions about operational exposure, competitive positioning, and regulatory consequence — not to abstract discussions about framework alignment. "What happens to our FDA relationship if we deploy a diagnostic AI system without a documented governance process?" is a more productive opening than "GOVERN 1.1 requires a risk management strategy."

Phase 2 — Accountability Mapping (Weeks 2–4)

Once risk tolerance is defined, accountability structures need to follow. This means building a RACI matrix that covers AI-specific decisions: who is responsible for approving AI system deployments, who owns incident response, who monitors regulatory developments, who manages vendor AI relationships.

Most organizations discover during this phase that these accountabilities have been either scattered or unassigned. Legal thinks compliance owns it. Compliance thinks IT owns it. IT thinks the business units own it. The matrix forces a conversation that resolves the ambiguity.
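
One way to pressure-test the matrix is to treat it as structured data and check it for gaps. The sketch below is purely illustrative: the decision names, role names, and helper function are placeholders, not a prescribed structure, but they show the shape of what Phase 2 should produce.

```python
# Illustrative sketch only: an AI-decision RACI matrix as structured data,
# with a check for decisions that lack a single accountable owner.
# Decision and role names are placeholders, not a prescribed set.

RACI = {
    # decision: {role: "R" (responsible), "A" (accountable), "C" (consulted), "I" (informed)}
    "approve_ai_deployment":   {"governance_committee": "A", "business_owner": "R",
                                "legal": "C", "it_security": "C"},
    "ai_incident_response":    {"compliance_lead": "A", "it_security": "R",
                                "legal": "C", "executive_sponsor": "I"},
    "regulatory_monitoring":   {"regulatory_affairs": "A", "compliance_lead": "R"},
    "vendor_ai_due_diligence": {"compliance_lead": "A", "procurement": "R",
                                "it_security": "C"},
}

def accountability_gaps(raci: dict) -> list[str]:
    """Decisions with no single 'A' -- exactly the gaps Phase 2 exists to surface."""
    return [decision for decision, roles in raci.items()
            if list(roles.values()).count("A") != 1]

print("Decisions lacking one accountable owner:", accountability_gaps(RACI) or "none")
```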

This phase should also produce a governance committee — a standing body that includes representation from legal, compliance, IT, and the relevant business lines. It doesn't need to be large. It needs clear decision rights and a documented cadence. Naming a Chief AI Risk Officer or a designated governance lead at this stage gives the committee a functional center.

Phase 3 — Core Policy Creation (Weeks 5–10)

With accountability defined and executive commitment documented, the policy layer can be built. This is the most substantive phase in terms of document output, and I'll cover the specific documents in the next section. The goal at this stage is to produce a coherent policy system, not a stack of disconnected procedures.

A coherent policy system means the documents reference each other, the terminology is consistent, and a reader can trace from the executive commitment at the top down to the specific operational steps at the bottom. Organizations that produce policies in isolation — one team writes the AI use policy, another writes the risk procedure, a third writes the incident procedure — end up with a collection of documents that don't form a system. Auditors notice that immediately.

Phase 4 — Governance Cadence and Tool Deployment (Weeks 11–16)

Policies only work if they're activated. Phase 4 is about standing up the operational rhythm that keeps GOVERN alive: a monthly governance committee meeting, a quarterly program review against defined metrics, and an annual strategy refresh tied to the broader enterprise risk calendar. The AI system inventory goes live in this phase, intake procedures for new AI deployments go operational, and the training program launches for the first wave of employees.

Tool deployment at this stage usually means getting a system inventory into a manageable format — a shared spreadsheet works for smaller organizations, a GRC platform for larger ones — and establishing the workflow for how new AI systems get registered, classified, and reviewed before deployment.
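
As a concrete sketch of what an inventory record and a periodic-review check might look like in a structured register, assuming illustrative field names, risk tiers, and a one-year review window rather than anything the framework mandates:

```python
# Minimal sketch of an AI system inventory record, assuming a spreadsheet- or
# GRC-style register. Field names, tiers, and the review window are illustrative.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"   # e.g. requires executive-level approval under the governance policy

@dataclass
class AISystemRecord:
    name: str
    business_owner: str            # accountable owner from the Phase 2 RACI matrix
    source: str                    # "internal" or "vendor" (including embedded vendor AI)
    intended_use: str
    risk_tier: RiskTier
    approved: bool = False
    last_review: date | None = None
    vendor_contract_ref: str | None = None   # where audit / incident-notification clauses live

def needs_review(record: AISystemRecord, max_age_days: int = 365) -> bool:
    """Flag records never reviewed or past the periodic-review window."""
    return record.last_review is None or (date.today() - record.last_review).days > max_age_days
```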

Phase 5 — Ongoing Monitoring

What good looks like in a mature GOVERN implementation: policy compliance rates above 90%, AI incident response times within defined SLAs, training completion above 85% across required roles, and audit findings that are identified internally before external review rather than after. None of those numbers happen automatically. They require measurement points built into the operational cadence from the start.
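
Those measurement points only work if they can be computed from records you already keep. A rough sketch, assuming simple lists of policy checks, training records, and incident logs; the field names and the 72-hour SLA are placeholders:

```python
# Rough sketch of the Phase 5 measurement points. Record shapes, field names,
# and the SLA value are assumptions; the targets mirror the thresholds above.

def _rate(hits: int, total: int) -> float:
    return hits / total if total else 0.0

def govern_metrics(policy_checks, training_records, incidents, sla_hours=72):
    return {
        "policy_compliance":    _rate(sum(c["compliant"] for c in policy_checks), len(policy_checks)),
        "training_completion":  _rate(sum(t["completed"] for t in training_records), len(training_records)),
        "incidents_within_sla": _rate(sum(i["hours_to_resolve"] <= sla_hours for i in incidents), len(incidents)),
    }

TARGETS = {"policy_compliance": 0.90, "training_completion": 0.85, "incidents_within_sla": 1.00}

def shortfalls(metrics: dict) -> dict:
    """Metrics currently below target -- the agenda for the next committee meeting."""
    return {name: (value, TARGETS[name]) for name, value in metrics.items() if value < TARGETS[name]}
```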


The Documentation Compliance Teams Actually Need

I want to be specific here, because "develop AI governance policies" is advice that doesn't actually help anyone decide what to write. For regulated industries, seven documents form the practical core of a GOVERN-compliant program.

AI Governance Policy — The top-level executive commitment document. It states the organization's risk tolerance, defines the scope of the program (which AI systems, which business functions), and establishes approval authorities. This is what ties to GOVERN 1.1 and 1.2. Without it, everything else lacks a parent document.

AI Risk Assessment Procedure — The documented process for evaluating individual AI systems against the risk tolerance defined in the governance policy. It should specify assessment triggers (new deployments, significant changes, periodic review), the criteria used to classify risk, and the approval workflow for each risk tier. This maps directly to GOVERN 1.3, which calls for processes that scale risk management activity to the organization's documented risk tolerance.
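
A minimal sketch of how the triggers, classification criteria, and approval routing might fit together; the questions, tiers, and routes are placeholders for whatever your own procedure and risk tolerance statement actually define:

```python
# Illustrative only: risk-tier classification and approval routing for the
# assessment procedure. Criteria, tiers, and routes are placeholders.

ASSESSMENT_TRIGGERS = {"new_deployment", "significant_change", "periodic_review"}

def classify_risk(impacts_safety_or_health: bool,
                  drives_consequential_decisions: bool,
                  processes_regulated_data: bool) -> str:
    if impacts_safety_or_health:
        return "high"
    if drives_consequential_decisions or processes_regulated_data:
        return "medium"
    return "low"

APPROVAL_ROUTE = {
    "high":   "executive sponsor + governance committee",
    "medium": "governance committee",
    "low":    "business owner, with notification to compliance",
}

tier = classify_risk(impacts_safety_or_health=False,
                     drives_consequential_decisions=True,
                     processes_regulated_data=True)
print(tier, "->", APPROVAL_ROUTE[tier])   # medium -> governance committee
```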

AI System Inventory — A register of every AI system the organization uses, owns, or relies on — including vendor-provided AI embedded in other software platforms. GOVERN 1.6 requires organizational awareness of AI systems in use. In practice, most organizations are surprised by what they find when they build this for the first time. Shadow AI adoption in business units is nearly universal.

Third-Party AI Risk Procedure — The process for evaluating and monitoring vendor AI systems. GOVERN 6 specifically addresses third-party risk, including due diligence requirements, contractual provisions (audit rights, incident notification obligations, data handling), and ongoing monitoring. This is one of the most commonly skipped documents, and one of the first things sophisticated regulators ask about.

Trustworthy AI Integration Standard — The technical and operational criteria the organization applies to evaluate AI systems for trustworthiness before deployment. GOVERN 1.2 requires that policies integrate trustworthy AI principles — explainability, fairness, robustness, and accountability. This standard translates those principles into concrete evaluation criteria that your teams can actually apply during risk assessment.
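
One workable format is a principle-to-check mapping that reviewers walk through during assessment. The checks below are illustrative examples, not language from the framework:

```python
# Illustrative principle-to-check mapping for a Trustworthy AI Integration
# Standard. The specific checks are examples, not criteria mandated by the RMF.

EVALUATION_CRITERIA = {
    "explainability": [
        "Outputs can be explained at the level the decision's audience requires",
        "Documentation identifies the inputs that most influence predictions",
    ],
    "fairness": [
        "Performance measured across relevant subgroups before deployment",
        "Disparity thresholds and remediation steps are documented",
    ],
    "robustness": [
        "Behavior tested against out-of-range and adversarial inputs",
        "Degradation criteria and a rollback procedure are defined",
    ],
    "accountability": [
        "A named business owner appears in the AI system inventory",
        "The approval record shows who signed off and against which criteria",
    ],
}

def unmet(principle: str, results: dict[str, bool]) -> list[str]:
    """Checks for a principle that the assessment has not yet satisfied."""
    return [check for check in EVALUATION_CRITERIA[principle] if not results.get(check, False)]
```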

Training and Competency Plan — GOVERN 2 addresses AI risk management workforce competencies. A functional training plan defines role-specific learning paths: what executives need to understand about AI risk, what developers need to know about responsible AI design, what compliance staff need to know about regulatory expectations, and what end users need to know before using an AI-assisted tool. One training for everyone is not a plan.

AI Incident Response Procedure — The document that defines what counts as an AI incident, who is notified and in what sequence, what the investigation process looks like, and when regulatory reporting is triggered. In regulated industries, this procedure needs to account for the possibility that an AI-related incident may be a reportable event — a medical device malfunction, a financial services adverse action, an adverse drug event — and the timeframes for those reports are not flexible.
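
A sketch of how the triage and notification logic might be expressed, with the reporting deadlines deliberately left blank; the real timeframes and reportability criteria come from the applicable regulations and your own procedure, not from this illustration:

```python
# Illustrative incident triage sketch. Categories, sequence, and especially the
# reporting deadlines are placeholders -- pull the actual timeframes and
# reportability criteria from the applicable regulation and your procedure.

NOTIFICATION_SEQUENCE = ["incident owner", "compliance lead", "legal",
                         "executive sponsor", "regulatory affairs (if reportable)"]

REPORTABLE_CATEGORIES = {
    "device_malfunction":      {"regulator": "FDA",  "deadline_days": None},  # set per the applicable reporting rule
    "adverse_drug_event":      {"regulator": "FDA",  "deadline_days": None},  # set per the applicable reporting rule
    "adverse_action_decision": {"regulator": "CFPB", "deadline_days": None},  # set per the applicable requirement
}

def triage(category: str) -> dict:
    """Return the notification path and, if reportable, the regulatory routing."""
    return {"notify": NOTIFICATION_SEQUENCE,
            "regulatory_report": REPORTABLE_CATEGORIES.get(category)}
```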

These seven documents don't require months of work to produce at a first version. They require clarity about what your organization has decided — risk tolerance, accountability, policy scope — which is why the executive engagement and accountability mapping phases have to come first. The documents capture decisions. They can't be written before the decisions are made.


How Regulators Are Using NIST AI RMF

The shift that matters most for regulated industries is this: NIST AI RMF is moving from "best practice" to "compliance baseline" faster than most organizations are tracking. If you're waiting to see how this settles before investing in GOVERN, you're probably already behind.

The FDA has increasingly referenced NIST AI RMF in its guidance on AI and machine learning in medical devices, building on the 2021 action plan for AI/ML-based software as a medical device. GOVERN 1.1's requirement to document legal and regulatory requirements in the risk management strategy maps directly to the predicate rule documentation expectations the FDA applies during review. An organization that has built GOVERN properly is substantially better positioned for premarket submissions involving AI-assisted decision making.

The CFPB issued guidance expecting NIST AI RMF alignment for financial AI systems, particularly in the context of adverse action decisions and fair lending requirements. FTC Section 5 enforcement actions have begun citing AI governance expectations consistent with NIST AI RMF language — the FTC's focus on algorithmic accountability and transparency tracks closely with GOVERN's accountability and policy requirements.

Federal Executive Order 14110 from October 2023 directed federal agencies to require NIST AI RMF alignment in AI procurement, which means that any organization selling AI-enabled products or services to the federal government already faces practical NIST AI RMF requirements as a condition of doing business.

For organizations building toward ISO 42001 certification or EU AI Act compliance, the overlap with NIST AI RMF GOVERN is substantial — approximately 70% by my reading of the two frameworks side by side. The organizational context, risk management strategy, and accountability requirements in GOVERN align closely with ISO 42001 Clauses 4, 5, and 6, and with the EU AI Act's governance obligations for providers and deployers of high-risk AI systems. Organizations that build GOVERN properly are not starting over when they pursue ISO 42001. They're picking up where they left off.


Where GOVERN Implementation Breaks Down

In my view, the failure modes in GOVERN implementation are more predictable than people expect. They're not technical failures. They're organizational ones.

Governance theater is the most common. Policies get written, a committee gets formed, and then nothing changes. The policies sit in a SharePoint folder. The committee meets once and doesn't meet again. No one enforces the AI system inventory requirement. Governance theater is worse than no governance because it creates the appearance of a program without the substance, and that appearance is exactly what gets tested during an audit or an incident.

Ambiguity about what "done" looks like stalls a lot of implementations. Teams don't know what GOVERN completion looks like, so they keep writing more documents rather than activating the ones they have. The answer is: GOVERN is operational when your risk tolerance is documented and approved, your accountability structures are functioning, your policy system covers the seven document categories, and your governance cadence is running. That's done enough to start using it. Refinement happens from there.

The silo problem is persistent. AI teams see governance as friction that slows deployment. Legal and compliance teams lack the technical context to push back meaningfully on AI decisions. The result is that governance gets applied after deployment rather than before — or not at all. GOVERN doesn't resolve this tension automatically, but it does create the structures — committee, RACI, policy — that force the conversation to happen at the right time.

Third-party blind spots are underestimated almost universally. Organizations govern the AI systems they build but not the AI systems they buy or embed. Vendor AI is often far more pervasive than IT leadership realizes, and GOVERN 6 requires organizations to address it explicitly. The question to ask is: if every AI feature in every vendor platform you use were documented and assessed, how many would you find? Most organizations that answer that question honestly find they have a much larger third-party AI footprint than their program accounts for.

Generative AI caught governance off guard. Most organizations running GenAI today are doing so under policies written for predictive models. The risk profile is different. The failure modes are different. The transparency requirements are different. GOVERN programs built before 2023 almost certainly need to be revisited for GenAI coverage, and NIST AI 600-1 is where that revision starts.

The blunt version: most AI governance programs have the documents but not the decisions. They look like GOVERN from a distance and collapse under pressure.


Where to Start

The risk tolerance conversation is the right first step, and it's the one most organizations skip in favor of something that feels more productive. Writing a policy before you've defined your risk tolerance is writing without an anchor. The document will be vague because it has to be — there's nothing concrete to anchor it to. Get the executive team to agree on what categories of AI risk the organization will and won't accept, and get that in writing. Everything else follows from that.

Before writing a single policy, build your AI system inventory. I've seen organizations spend months drafting governance documents only to discover that their actual AI footprint is three times what they thought — largely because of embedded vendor AI. Your policies have to cover what you actually have, not what you think you have. The inventory precedes the policy, not the other way around.

When you bring this to executives, frame it around business exposure, not compliance obligation. The question that gets attention is: "What happens to our regulatory standing — with the FDA, with the CFPB, with the FTC — if an AI system we use causes harm and we can't demonstrate we had a governance program?" That's a concrete business question with a concrete answer. It tends to move faster than "we need to align with NIST AI RMF."

If your organization is trying to build this program and finding that the organizational complexity is larger than the compliance team can navigate alone — the internal politics, the technical gaps, the regulatory translation — that's the situation where an external advisory engagement tends to pay for itself quickly. The GOVERN function is not technically difficult. It's organizationally difficult. Sometimes the most useful thing an outside perspective can do is name what's stuck and give the team permission to move past it.

The door is open if you want to talk through where your program stands and what it would take to get GOVERN operational in your organization.


Jared Clark

JD · RAC · ISO/GMP Expert · AI Governance Advisor

Jared Clark advises regulated enterprises on AI governance, risk management, and compliance program design at Regulated AI Consulting. His background spans quality management systems, regulatory affairs credentialing, and legal practice — a combination that lets him translate between the technical, legal, and operational dimensions of AI governance without losing any of them.

