NIST AI RMF: 137 Obligations, Mapped and Searchable
NIST AI RMF is a voluntary framework. It says so in the preface. “Voluntary, rights-preserving, non-sector-specific.” No penalties. No enforcement mechanism. No regulator knocking on your door because you didn’t implement Govern 1.1.
That was true when NIST published the framework in January 2023. It stopped being true when Colorado passed SB 24-205 in May 2024.
Colorado’s AI Act – effective June 30, 2026 – creates a safe harbor for organizations that follow “nationally or internationally recognized risk management frameworks for artificial intelligence systems.” The only framework that matches that description with any specificity: NIST AI RMF.
A state AG investigating an AI discrimination complaint can now ask: “Did you follow NIST AI RMF?” If yes, you get the safe harbor. If no, you’re defending your risk management approach from scratch. NIST’s 137 voluntary recommendations just became the baseline against which your compliance program is measured.
And Colorado isn’t alone. The pattern is spreading across states. Federal procurement already references NIST AI RMF. The framework that was optional in 2024 is becoming the de facto US AI governance standard – not through federal mandate, but through state-level adoption that makes “voluntary” a distinction without a difference.
The Four Functions
NIST AI RMF organizes AI risk management into four functions. This isn’t arbitrary taxonomy – it’s a lifecycle. Govern establishes the foundation. Map identifies where risk lives. Measure evaluates how significant the risks are. Manage addresses them.
Govern: The Foundation (42 obligations)
Govern is the largest function and the one most organizations skip. It covers the organizational structures, policies, and cultural practices that make AI risk management possible – before you evaluate a single AI system.
Govern 1 establishes the policies, processes, procedures, and practices for managing AI risk – starting with the legal and regulatory landscape your organization operates in (Govern 1.1). What regulations apply? What standards are relevant? What contractual obligations exist? This isn’t an AI-specific question – it’s a business context question that informs everything downstream.
Govern 2 addresses accountability. Who is responsible for AI risk? Not “the AI team” – specific roles with specific decision authority. Govern 2.1 requires that “roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.”
Govern 3 covers workforce diversity and competence. Govern 4 addresses organizational culture – whether the environment supports risk identification or suppresses it. Govern 5 covers engagement with and feedback from external stakeholders, and Govern 6 addresses third-party and supply-chain risk.
42 obligations. None of them touch a model. All of them determine whether the model-level risk management you do in Map, Measure, and Manage actually functions within the organization.
Map: Identifying the Risk Surface (35 obligations)
Map is where AI meets context. Map 1 requires understanding the intended purpose and known limitations of the AI system – not in the abstract, but for the specific deployment. Map 2 covers data quality and representativeness. Map 3 addresses benefits and costs to affected communities. Map 5 covers the system’s potential impacts on individuals, groups, and society.
The Map function is where most organizations first encounter difficulty. “Describe the AI system’s potential impacts on civil liberties” is a requirement that doesn’t have a template answer. It requires thinking about the specific system in its specific deployment context with its specific user population.
35 obligations. Each one requires system-specific analysis, not policy-level documentation.
Measure: Quantifying the Risks (31 obligations)
Measure is the testing function. Measure 1 establishes metrics and methodologies for evaluating AI risk. Measure 2 covers the actual evaluation – testing AI systems for performance, fairness, bias, reliability, and robustness. Measure 3 addresses tracking and monitoring mechanisms. Measure 4 requires gathering feedback from affected communities.
Measure 2 is where the technical work lives. Measure 2.6 alone – the requirement to evaluate the AI system regularly for safety risks – generates significant assessment work for any system that makes consequential decisions. Measure 2.11 requires that fairness and bias are evaluated using appropriate quantitative and qualitative techniques, with results documented.
31 obligations. Each one implies specific testing artifacts, documented results, and ongoing monitoring.
Manage: Addressing What You Found (29 obligations)
Manage is the response function. Manage 1 covers risk prioritization and response. Manage 2 addresses strategies to maximize benefits and minimize negative impacts. Manage 3 covers risks from third-party resources, including pre-trained models. Manage 4 addresses risk treatments – including response and recovery plans for AI-related incidents – and the documentation, monitoring, and communication of risk management decisions.
This function closes the loop. You identified the risk (Map). You measured its significance (Measure). Now you address it (Manage) – with documented decisions, implemented controls, and a plan for what happens when things go wrong.
29 obligations. Each one requires evidence of action, not just evidence of awareness.
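The four per-function counts walked through above do add up to the headline figure. A quick sanity check (the dictionary structure is illustrative, not a real data format):

```python
# Per-function obligation counts as stated in this article (illustrative structure).
FUNCTION_COUNTS = {
    "Govern": 42,
    "Map": 35,
    "Measure": 31,
    "Manage": 29,
}

total = sum(FUNCTION_COUNTS.values())
print(total)  # 137
```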
Why “137” Is the Right Number
Different organizations count NIST AI RMF obligations differently. Some count the four functions. Some count the 19 categories. Some count the subcategories. Some count the suggested actions within the subcategories.
We count at the level of enforceable specificity – the level at which an auditor would evaluate compliance. “Govern 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively” is one category-level statement. But it contains multiple discrete requirements: policies must exist, they must be transparent, and they must be implemented effectively. Three separate things to verify. Three separate evidence artifacts.
137 obligations at this granularity. Not the 4 functions. Not the 19 categories. The actual requirements an auditor would check.
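The decomposition can be sketched as data: one category-level statement yields multiple discrete, independently verifiable obligations, each with its own evidence artifact. The class and field names below are hypothetical, chosen only to illustrate the methodology:

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    """One enforceable requirement an auditor would check (hypothetical model)."""
    ref: str          # category it decomposes from, e.g. "GOVERN-1"
    requirement: str  # the discrete thing to verify
    evidence: str     # the artifact an auditor would ask for

# Govern 1's single category statement decomposes into three checkable obligations.
govern_1 = [
    Obligation("GOVERN-1", "AI risk management policies are in place",
               "policy documents"),
    Obligation("GOVERN-1", "Policies are transparent",
               "published and communicated policies"),
    Obligation("GOVERN-1", "Policies are implemented effectively",
               "implementation records, audit results"),
]

print(len(govern_1))  # 3 separate evidence artifacts from one category statement
```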
This is the same decomposition methodology we applied to the EU AI Act’s 334 obligations and ISO 42001’s 503 obligations. The count matters because it determines the scope of the assessment. A consultant who scopes a NIST AI RMF engagement at “four functions” will underestimate by an order of magnitude. A consultant who scopes it at 137 obligations will estimate accurately.
The Cross-Regulation Bridge
NIST AI RMF doesn’t exist in isolation. Organizations subject to NIST are almost always subject to other AI frameworks simultaneously. The cross-regulation mapping becomes critical here.
Article 9 of the EU AI Act requires risk management systems for high-risk AI. NIST Govern 1 requires risk management policies. Colorado requires risk management programs. These aren’t identical obligations – they differ in scope, specificity, and enforcement context – but they address the same underlying control.
When we map cross-regulation overlaps, the NIST-to-EU AI Act relationship is one of the densest. Risk management, documentation, monitoring, and human oversight overlap substantially between the two frameworks. A consultant who assesses NIST AI RMF and then assesses EU AI Act for the same client can leverage the cross-references to avoid duplicating work.
The regulation browser in ReguLume shows these cross-references at the obligation level. Open any NIST obligation and see which EU AI Act, Colorado, or ISO 42001 obligations address the same requirement. The relationship type tells you whether meeting one satisfies the other – or whether both need independent evidence.
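The obligation-level cross-reference idea can be modeled minimally. The relationship-type names below are made up for illustration – they are not ReguLume’s actual taxonomy – but they show why the relationship type matters: only one type lets evidence do double duty.

```python
from dataclasses import dataclass

# Hypothetical relationship types; the real taxonomy may differ.
SATISFIES = "satisfies"   # meeting the source obligation covers the target
OVERLAPS = "overlaps"     # related controls, but each needs independent evidence

@dataclass
class CrossRef:
    source: str
    target: str
    relation: str

refs = [
    CrossRef("NIST GOVERN-1", "EU AI Act Article 9", OVERLAPS),
    CrossRef("NIST GOVERN-1", "Colorado SB 24-205 risk management program", OVERLAPS),
]

def needs_independent_evidence(ref: CrossRef) -> bool:
    """Only a 'satisfies' relation lets one evidence artifact cover both sides."""
    return ref.relation != SATISFIES

print(all(needs_independent_evidence(r) for r in refs))  # True
```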
Browsing 137 Obligations
Every NIST AI RMF obligation is searchable in the regulation browser. Filter by function (Govern, Map, Measure, Manage), by obligation type (requirement, documentation, risk management, monitoring), or search semantically by describing what you’re looking for.
Each obligation shows: the requirement text, the function and category it belongs to, the obligation type classification, applicability criteria, and evidence types – the kinds of documentation an auditor would expect. Cross-references to related obligations in other regulations appear alongside.
The browser isn’t a reading tool. It’s a scoping tool. A consultant evaluating a client’s AI hiring tool against NIST AI RMF can filter to Map 2 (data quality) and Map 3 (community impact) to see exactly which obligations apply to a system that makes employment decisions – then assess those obligations against the client’s specific system inventory.
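That scoping workflow amounts to faceted filtering over the obligation set. The sketch below uses made-up obligation refs and facet values – it is not ReguLume’s API – but it shows the shape of the query a consultant is running:

```python
# Hypothetical in-memory obligations mirroring the browser's function/category facets.
obligations = [
    {"ref": "MAP-2.1", "function": "Map", "category": "Map 2", "type": "documentation"},
    {"ref": "MAP-3.1", "function": "Map", "category": "Map 3", "type": "risk management"},
    {"ref": "MEASURE-2.11", "function": "Measure", "category": "Measure 2", "type": "requirement"},
]

def filter_obligations(items, function=None, categories=None):
    """Return obligations matching the selected function and category facets."""
    return [
        o for o in items
        if (function is None or o["function"] == function)
        and (categories is None or o["category"] in categories)
    ]

# Scoping an AI hiring tool: restrict to Map 2 and Map 3.
scoped = filter_obligations(obligations, function="Map", categories={"Map 2", "Map 3"})
print([o["ref"] for o in scoped])  # ['MAP-2.1', 'MAP-3.1']
```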
From the browser to the mapping engine. From the mapping to the gap analysis. From the gap analysis to the evaluation plan. From the evaluation to the governance score. One pipeline. 137 obligations.
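The pipeline above can be sketched end to end. Everything here is illustrative – placeholder obligation refs, a toy evidence model, invented function names – but it makes the flow from obligation set to gap list to score concrete:

```python
# Hypothetical sketch of the described pipeline: browse -> gap analysis -> score.
def browse_nist_obligations():
    """All 137 NIST AI RMF obligations, as placeholder refs."""
    return [f"NIST-{i:03d}" for i in range(1, 138)]

def gap_analysis(obligations, evidenced):
    """Obligations with no evidence on file are gaps needing an evaluation plan."""
    return [o for o in obligations if o not in evidenced]

def governance_score(obligations, evidenced):
    """Fraction of obligations backed by evidence."""
    return len(evidenced & set(obligations)) / len(obligations)

obligations = browse_nist_obligations()
evidenced = set(obligations[:100])   # suppose 100 of the 137 have evidence on file
gaps = gap_analysis(obligations, evidenced)

print(len(gaps))                                           # 37
print(round(governance_score(obligations, evidenced), 2))  # 0.73
```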
Voluntary Is a Technicality
NIST AI RMF remains voluntary in the narrow legal sense. No federal statute requires compliance. No federal agency enforces it.
But Colorado’s safe harbor references it. Federal procurement prefers it. ISO 42001 aligns with it. The EU AI Act’s risk management requirements parallel it. State after state is incorporating “nationally recognized risk management frameworks” into their AI legislation – language that points directly at NIST.
For compliance consultants, the practical question isn’t whether NIST AI RMF is mandatory. It’s whether your client can afford to skip the one framework that every other framework references.
137 obligations. Four functions. Zero ambiguity about what compliance looks like when you decompose it to the level an auditor checks.
NIST AI RMF obligations are mapped from the NIST AI Risk Management Framework 1.0 (January 2023) and the NIST AI RMF Playbook. Obligation counts reflect decomposition to enforceable-requirement granularity. Cross-regulation references link NIST obligations to EU AI Act, Colorado AI Act, ISO 42001, and other mapped frameworks. Browse all 137 obligations in the ReguLume regulation browser.
Map obligations to your AI systems
ReguLume covers 2,964 obligations across 15 regulations. Score your compliance posture in hours, not months.
Get Started