NIST-AI-RMF
NIST AI Risk Management Framework 1.0 (AI 100-1)
- I. Foundational Information
- Art. FR-1. Understanding and Addressing Risks, Impacts, and Harms (3)
- Art. TR-1. Valid and Reliable (4)
- Art. TR-2. Safe (5)
- Art. TR-3. Secure and Resilient (3)
- Art. TR-4. Accountable and Transparent (3)
- Art. TR-5. Explainable and Interpretable (3)
- Art. TR-6. Privacy-Enhanced
- Art. TR-7. Fair — with Harmful Bias Managed
- II. AI RMF Core Framework
- Ch. 1 — GOVERN
- Art. GV-1. Policies, Processes, Procedures, and Practices (8)
- Art. GV-2. Accountability Structures (3)
- Art. GV-3. Workforce Diversity, Equity, Inclusion, and Accessibility (2)
- Art. GV-4. Organizational Culture of AI Risk (6)
- Art. GV-5. Engagement with Relevant AI Actors (3)
- Art. GV-6. Third-Party AI Risks and Supply Chain (3)
- Ch. 2 — MAP
- Art. MP-1. Context is Established and Understood (8)
- Art. MP-2. Categorization of the AI System (6)
- Art. MP-3. AI Capabilities, Usage, Goals, Benefits, and Costs (5)
- Art. MP-4. Third-Party Component Risks and Benefits (5)
- Art. MP-5. Impact Characterization (4)
- Ch. 3 — MEASURE
- Art. MS-1. Appropriate Methods and Metrics (11)
- Art. MS-2. Trustworthy Characteristics Evaluation (24)
- Art. MS-3. Risk Tracking Mechanisms (5)
- Art. MS-4. Measurement Efficacy Feedback (6)
- Ch. 4 — MANAGE
- Art. MG-1. Risk Prioritization and Response (4)
- Art. MG-2. Strategies for Benefits and Impact Management (6)
- Art. MG-3. Third-Party AI Risk Management (2)
- Art. MG-4. Risk Treatment and Communication Plans (5)
- Annex A. NIST AI RMF Subcategory Reference
Requirement Obligations
Title I — Foundational Information
Article TR-1. Valid and Reliable
3 obligations
NIST-RMF-TR-1-01
Requirement
Ensure AI System Validation
AI system providers must ensure their systems are valid by confirming through objective evidence that requirements for the intended use or application have been fulfilled.
NIST-RMF-TR-1-02
Requirement
Ensure AI System Reliability
AI system providers must ensure their systems are reliable by implementing measures to ensure the system performs as required, without failure, for a given time interval, under given conditions.
NIST-RMF-TR-1-04
Requirement
Ensure AI System Correctness and Precision
AI system providers must ensure their systems are sufficiently correct, precise, or exact for their intended purpose.
Article TR-2. Safe
3 obligations
NIST-RMF-TR-2-01
Requirement
Safe operation under defined conditions
AI systems must be designed, developed, and deployed to not lead to endangerment of human life, health, property, or the environment.
NIST-RMF-TR-2-02
Requirement
Responsible design, development, and deployment practices
Implement responsible practices throughout the design, development, and deployment phases to improve safe operation of AI systems.
NIST-RMF-TR-2-04
Requirement
Responsible decision-making by deployers and operators
Deployers and operators must engage in responsible decision-making practices when using AI systems.
Article TR-3. Secure and Resilient
3 obligations
NIST-RMF-TR-3-01
Requirement
Ensure AI System Resilience Against Adverse Events
AI systems must be designed and implemented to withstand unexpected adverse events or unexpected changes in their environment or use.
NIST-RMF-TR-3-02
Requirement
Implement Safe Degradation Mechanisms
AI systems must be designed to degrade safely and gracefully when maintaining full functionality is not possible due to unexpected adverse events or changes in operating conditions.
NIST-RMF-TR-3-03
Requirement
Ensure Deployment Ecosystem Resilience
The ecosystems in which AI systems are deployed must be resilient and able to withstand unexpected adverse events or changes in their environment.
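The resilience and safe-degradation obligations above (TR-3-01 and TR-3-02) describe an architectural pattern more than a single technique. A minimal sketch of one such pattern in Python — falling back to a conservative path when the primary model fails or is insufficiently confident — is shown below. All names (`predict_with_fallback`, `human_review_stub`, the 0.7 threshold) are illustrative assumptions, not part of the framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float
    degraded: bool = False  # True when the fallback path produced the answer

def predict_with_fallback(
    model: Callable[[str], Prediction],
    fallback: Callable[[str], Prediction],
    x: str,
    min_confidence: float = 0.7,
) -> Prediction:
    """Return the primary model's output, or degrade gracefully.

    Instead of failing outright, the system routes to a simpler,
    conservative path when the primary model raises an error or is
    insufficiently confident (cf. TR-3-02, safe degradation).
    """
    try:
        pred = model(x)
        if pred.confidence >= min_confidence:
            return pred
    except Exception:
        pass  # primary path unavailable: degrade, do not crash
    return fallback(x)

def human_review_stub(x: str) -> Prediction:
    """Conservative fallback: defer the decision to a human."""
    return Prediction(label="needs_human_review", confidence=1.0, degraded=True)
```

In practice the fallback might be a rule-based system or an older validated model; the key design choice is that the degraded path is strictly more conservative than the primary one.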
Article TR-4. Accountable and Transparent
1 obligation
Title II — AI RMF Core Framework
Chapter 1 — GOVERN
Article GV-1. Policies, Processes, Procedures, and Practices
2 obligations
NIST-RMF-GV-1-02
Requirement
Integrate trustworthy AI characteristics into organizational governance
Organizations must integrate the characteristics of trustworthy AI into their organizational policies, processes, procedures, and practices.
NIST-RMF-GV-1-07
Requirement
Establish safe AI system decommissioning processes
Organizations must establish processes and procedures for decommissioning and phasing out AI systems safely in a manner that does not increase risks or decrease the organization's trustworthiness.
Article GV-2. Accountability Structures
2 obligations
NIST-RMF-GV-2-02
Requirement
Provide AI risk management training to personnel and partners
Organizations must provide AI risk management training to their personnel and partners to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
NIST-RMF-GV-2-03
Requirement
Executive leadership responsibility for AI system risk decisions
Executive leadership of organizations must take responsibility for decisions about risks associated with AI system development and deployment.
Article GV-4. Organizational Culture of AI Risk
1 obligation
Article GV-5. Engagement with Relevant AI Actors
3 obligations
NIST-RMF-GV-5-01
Requirement
Establish policies for external feedback collection on AI risks
Organizations must establish and maintain organizational policies and practices to collect, consider, prioritize, and integrate feedback from external parties regarding the potential individual and societal impacts related to AI risks.
NIST-RMF-GV-5-02
Requirement
Establish mechanisms for regular feedback incorporation into AI systems
Organizations must establish mechanisms that enable development and deployment teams to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.
NIST-RMF-GV-5-03
Requirement
Maintain robust engagement processes with relevant AI actors
Organizations must establish and maintain processes that ensure robust engagement with relevant AI actors as part of the organization's AI risk management efforts.
Chapter 2 — MAP
Article MP-1. Context is Established and Understood
5 obligations
NIST-RMF-MP-1-02
Requirement
Ensure interdisciplinary team diversity and document participation
Organizations must ensure that interdisciplinary AI actors with competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and must document their participation.
NIST-RMF-MP-1-03
Requirement
Prioritize interdisciplinary collaboration opportunities
Organizations must prioritize opportunities for interdisciplinary collaboration in AI system development and deployment.
NIST-RMF-MP-1-05
Requirement
Define or re-evaluate business value context
Organizations must clearly define the business value or context of business use for new AI systems, or re-evaluate this context for existing AI systems.
NIST-RMF-MP-1-07
Requirement
Elicit and understand system requirements from relevant AI actors
Organizations must elicit system requirements from and ensure they are understood by relevant AI actors, including requirements such as respecting the privacy of users.
NIST-RMF-MP-1-08
Requirement
Consider socio-technical implications in design decisions for AI risk mitigation
Organizations must ensure that design decisions take socio-technical implications into account to address AI risks.
Article MP-2. Categorization of the AI System
1 obligation
Article MP-3. AI Capabilities, Usage, Goals, Benefits, and Costs
1 obligation
Article MP-5. Impact Characterization
2 obligations
NIST-RMF-MP-5-02
Requirement
Establish practices for regular engagement with AI actors
Organizations must establish and maintain practices for supporting regular engagement with relevant AI actors to integrate feedback about positive, negative, and unanticipated impacts.
NIST-RMF-MP-5-03
Requirement
Assign personnel for AI actor engagement and feedback integration
Organizations must designate personnel responsible for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts.
Chapter 3 — MEASURE
Article MS-2. Trustworthy Characteristics Evaluation
9 obligations
NIST-RMF-MS-2-02
Requirement
Ensure Human Subject Evaluations Meet Protection Requirements
When conducting evaluations involving human subjects, organizations must ensure these evaluations meet applicable human subjects protection requirements and are representative of the relevant population.
NIST-RMF-MS-2-03
Requirement
Measure and Demonstrate AI System Performance Criteria
Organizations must measure AI system performance or assurance criteria qualitatively or quantitatively and demonstrate them for conditions similar to the deployment setting(s).
NIST-RMF-MS-2-06
Requirement
Demonstrate AI System Validity and Reliability
Organizations must demonstrate that the AI system to be deployed is valid and reliable.
NIST-RMF-MS-2-09
Requirement
Demonstrate AI System Safety and Risk Tolerance
Organizations must demonstrate that the AI system to be deployed is safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits.
NIST-RMF-MS-2-10
Requirement
Implement Safety Metrics for System Reliability and Monitoring
Organizations must ensure safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
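MS-2-10 names the kinds of quantities a safety-metrics program should track: reliability, monitoring coverage, and response times to failures. As a rough sketch, such metrics could be computed from an incident log like the one below. The metric definitions and names (`safety_metrics`, `FailureIncident`) are illustrative assumptions; appropriate metrics depend on the system and its risk tolerance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class FailureIncident:
    detected_at: datetime   # when monitoring flagged the failure
    responded_at: datetime  # when the response action completed

def safety_metrics(total_requests: int,
                   incidents: List[FailureIncident]) -> Dict[str, float]:
    """Compute simple safety metrics of the kind MS-2-10 calls for:
    a reliability rate plus mean and worst-case response times to
    AI system failures, derived from an incident log.
    """
    n_failures = len(incidents)
    response_times = [
        (i.responded_at - i.detected_at).total_seconds() for i in incidents
    ]
    return {
        "reliability_rate": 1.0 - n_failures / total_requests,
        "mean_response_seconds": (
            sum(response_times) / n_failures if n_failures else 0.0
        ),
        "max_response_seconds": max(response_times, default=0.0),
    }
```

Feeding these numbers into real-time dashboards and alert thresholds, rather than computing them only at audit time, is what connects this obligation to the "real-time monitoring" language above.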
NIST-RMF-MS-2-11
Requirement
Evaluate AI System Security and Resilience
Organizations must evaluate AI system security and resilience as identified in the MAP function.
NIST-RMF-MS-2-19
Requirement
Evaluate Fairness and Bias
Organizations must evaluate fairness and bias as identified in the MAP function.
NIST-RMF-MS-2-21
Requirement
Assess Environmental Impact and Sustainability
Organizations must assess environmental impact and sustainability of AI model training and management activities as identified in the MAP function.
NIST-RMF-MS-2-23
Requirement
Evaluate TEVV Metrics and Processes Effectiveness
Organizations must evaluate the effectiveness of the employed TEVV metrics and processes in the MEASURE function.
Chapter 4 — MANAGE
Article MG-2. Strategies for Benefits and Impact Management
2 obligations
NIST-RMF-MG-2-03
Requirement
Implement Mechanisms to Sustain AI System Value
Organizations must establish and apply mechanisms to sustain the value of deployed AI systems throughout their operational lifetime.
NIST-RMF-MG-2-06
Requirement
Assign and Communicate Override Mechanism Responsibilities
Organizations must assign specific responsibilities for AI system override, disengagement, and deactivation functions and ensure those responsibilities are clearly communicated.
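One way to make the MG-2-06 assignment explicit and auditable is a small registry mapping each control action to the role that owns it. The sketch below is a hypothetical illustration, assuming role names and a `ControlRegistry` structure not found in the framework itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict

class ControlAction(Enum):
    OVERRIDE = "override"        # supersede a system decision
    DISENGAGE = "disengage"      # take the system out of the loop
    DEACTIVATE = "deactivate"    # shut the system down entirely

@dataclass
class ControlRegistry:
    """Maps each control action to a responsible role, so that
    override/disengagement/deactivation ownership is explicit
    and can be communicated and audited (cf. MG-2-06)."""
    owners: Dict[ControlAction, str] = field(default_factory=dict)

    def assign(self, action: ControlAction, role: str) -> None:
        self.owners[action] = role

    def responsible_role(self, action: ControlAction) -> str:
        if action not in self.owners:
            # An unassigned control action is itself a governance gap.
            raise KeyError(f"No owner assigned for {action.value}")
        return self.owners[action]

registry = ControlRegistry()
registry.assign(ControlAction.OVERRIDE, "model risk officer")
registry.assign(ControlAction.DEACTIVATE, "incident-response lead")
```

The failure mode the `KeyError` models — a control function nobody owns — is exactly what the obligation is meant to prevent; in practice the registry would live in governance documentation and be surfaced to the named roles.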
Article MG-4. Risk Treatment and Communication Plans
1 obligation