NIST-AI-RMF
NIST AI Risk Management Framework 1.0 (AI 100-1)
- I. Foundational Information
- Art. FR-1. Understanding and Addressing Risks, Impacts, and Harms (3)
- Art. TR-1. Valid and Reliable (4)
- Art. TR-2. Safe (5)
- Art. TR-3. Secure and Resilient (3)
- Art. TR-4. Accountable and Transparent (3)
- Art. TR-5. Explainable and Interpretable (3)
- Art. TR-6. Privacy-Enhanced
- Art. TR-7. Fair — with Harmful Bias Managed
- II. AI RMF Core Framework
- Ch. 1 — GOVERN
- Art. GV-1. Policies, Processes, Procedures, and Practices (8)
- Art. GV-2. Accountability Structures (3)
- Art. GV-3. Workforce Diversity, Equity, Inclusion, and Accessibility (2)
- Art. GV-4. Organizational Culture of AI Risk (6)
- Art. GV-5. Engagement with Relevant AI Actors (3)
- Art. GV-6. Third-Party AI Risks and Supply Chain (3)
- Ch. 2 — MAP
- Art. MP-1. Context is Established and Understood (8)
- Art. MP-2. Categorization of the AI System (6)
- Art. MP-3. AI Capabilities, Usage, Goals, Benefits, and Costs (5)
- Art. MP-4. Third-Party Component Risks and Benefits (5)
- Art. MP-5. Impact Characterization (4)
- Ch. 3 — MEASURE
- Art. MS-1. Appropriate Methods and Metrics (11)
- Art. MS-2. Trustworthy Characteristics Evaluation (24)
- Art. MS-3. Risk Tracking Mechanisms (5)
- Art. MS-4. Measurement Efficacy Feedback (6)
- Ch. 4 — MANAGE
- Art. MG-1. Risk Prioritization and Response (4)
- Art. MG-2. Strategies for Benefits and Impact Management (6)
- Art. MG-3. Third-Party AI Risk Management (2)
- Art. MG-4. Risk Treatment and Communication Plans (5)
- Annex A. NIST AI RMF Subcategory Reference
Title II — AI RMF Core Framework
Chapter 3 — MEASURE
Article MS-1. Appropriate Methods and Metrics
9 obligations
NIST-RMF-MS-1-03
Monitoring
Regularly assess appropriateness of AI metrics and effectiveness of controls
Organizations must conduct regular assessments of whether their AI metrics remain appropriate and whether existing controls remain effective, and must update both accordingly.
NIST-RMF-MS-1-04
Monitoring
Include error reports in regular assessments of metrics and controls
Organizations must incorporate reports of errors into their regular assessments of AI metrics appropriateness and control effectiveness.
NIST-RMF-MS-1-05
Risk Management
Consider potential impacts on affected communities in assessments
Organizations must include consideration of potential impacts on affected communities when regularly assessing the appropriateness of AI metrics and the effectiveness of existing controls.
NIST-RMF-MS-1-06
Human Oversight
Involve internal experts who were not front-line developers in assessments
Organizations must involve internal experts who did not serve as front-line developers of the AI system in regular assessments and updates.
NIST-RMF-MS-1-07
Human Oversight
Involve independent assessors in regular assessments and updates
Organizations must involve independent assessors (external to the organization) in regular assessments and updates of AI metrics and controls.
NIST-RMF-MS-1-08
Human Oversight
Consult domain experts in assessments as necessary per risk tolerance
Organizations must consult domain experts to support assessments when necessary based on their organizational risk tolerance.
NIST-RMF-MS-1-09
Human Oversight
Consult users in assessments as necessary per organizational risk tolerance
Organizations must consult users of the AI system to support assessments when necessary based on their organizational risk tolerance.
NIST-RMF-MS-1-10
Human Oversight
Consult external AI actors in assessments as necessary per risk tolerance
Organizations must consult AI actors external to the team that developed or deployed the AI system to support assessments when necessary based on their organizational risk tolerance.
NIST-RMF-MS-1-11
Human Oversight
Consult affected communities in assessments as necessary per risk tolerance
Organizations must consult affected communities to support assessments when necessary based on their organizational risk tolerance.
Article MS-2. Trustworthy Characteristics Evaluation
16 obligations
NIST-RMF-MS-2-01
Documentation
Document TEVV Test Sets, Metrics, and Tools
Organizations must document test sets, metrics, and details about the tools used during Testing, Evaluation, Validation, and Verification (TEVV).
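One way an organization might operationalize this documentation obligation is a structured, serializable record per TEVV run. The sketch below is illustrative only: the class, field names, and example values are assumptions, not an RMF-mandated schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TevvRecord:
    """Hypothetical record of one TEVV run (all names are illustrative)."""
    test_set: str                              # dataset name and version
    test_set_hash: str                         # checksum pinning the exact data used
    metrics: dict                              # metric name -> measured value
    tools: dict = field(default_factory=dict)  # tool name -> version

record = TevvRecord(
    test_set="loan-approval-eval-v3",
    test_set_hash="sha256:9f2c...",
    metrics={"accuracy": 0.94, "false_positive_rate": 0.03},
    tools={"scikit-learn": "1.4.2", "fairlearn": "0.10.0"},
)

# Serialize the record so it can be retained in an audit trail.
print(json.dumps(asdict(record), indent=2))
```

Pinning the test-set checksum alongside metric values and tool versions lets a later assessor reproduce exactly which data and tooling produced a reported number.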
NIST-RMF-MS-2-02
Requirement
Ensure Human Subject Evaluations Meet Protection Requirements
When conducting evaluations involving human subjects, organizations must ensure these evaluations meet applicable human subject protection requirements and are representative of the relevant population.
NIST-RMF-MS-2-03
Requirement
Measure and Demonstrate AI System Performance Criteria
Organizations must measure AI system performance or assurance criteria qualitatively or quantitatively and demonstrate those criteria for conditions similar to the deployment setting(s).
NIST-RMF-MS-2-04
Documentation
Document Performance and Assurance Measures
Organizations must document the measures used to evaluate AI system performance or assurance criteria.
NIST-RMF-MS-2-05
Monitoring
Monitor AI System Functionality and Behavior in Production
Organizations must monitor the functionality and behavior of the AI system and its components when in production, as identified in the MAP function.
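A minimal sketch of what such production monitoring could look like: track a behavioral metric over a sliding window and flag when it drifts past a tolerance established during the MAP function. The baseline, tolerance, and window size here are illustrative assumptions.

```python
from collections import deque

class BehaviorMonitor:
    """Hypothetical monitor: flags drift of a production metric
    beyond a tolerance set during risk mapping (values are examples)."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.values = deque(maxlen=window)  # keeps only the last `window` observations

    def observe(self, value: float) -> bool:
        """Record one observation; return True if the windowed mean
        has drifted beyond the configured tolerance."""
        self.values.append(value)
        mean = sum(self.values) / len(self.values)
        return abs(mean - self.baseline) > self.tolerance

monitor = BehaviorMonitor(baseline=0.94, tolerance=0.05)
alerts = [monitor.observe(v) for v in (0.93, 0.95, 0.70, 0.65)]
print(alerts)  # the two degraded readings pull the mean outside tolerance
```

In practice the alert would feed the incident and risk-tracking mechanisms described under MS-3 rather than just being printed.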
NIST-RMF-MS-2-06
Requirement
Demonstrate AI System Validity and Reliability
Organizations must demonstrate that the AI system to be deployed is valid and reliable.
NIST-RMF-MS-2-07
Documentation
Document Generalizability Limitations
Organizations must document limitations of the generalizability of the AI system beyond the conditions under which the technology was developed.
NIST-RMF-MS-2-08
Risk Management
Regularly Evaluate AI System for Safety Risks
Organizations must evaluate the AI system regularly for safety risks as identified in the MAP function.
NIST-RMF-MS-2-09
Requirement
Demonstrate AI System Safety and Risk Tolerance
Organizations must demonstrate that the AI system to be deployed is safe, that its residual negative risk does not exceed the organizational risk tolerance, and that it can fail safely, particularly if made to operate beyond its knowledge limits.
NIST-RMF-MS-2-10
Requirement
Implement Safety Metrics for System Reliability and Monitoring
Organizations must ensure safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
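As an illustrative sketch of the "response times for AI system failures" portion of this obligation, an organization could aggregate failure-event timestamps into simple safety metrics. The function and its event schema are assumptions for illustration, not an RMF-defined format.

```python
def safety_metrics(events):
    """Aggregate safety metrics from a failure-event log.

    `events` is a list of (detected_at, responded_at) pairs in seconds;
    the schema is a hypothetical example, not mandated by the RMF.
    """
    response_times = [responded - detected for detected, responded in events]
    return {
        "failure_count": len(events),
        "max_response_s": max(response_times, default=0.0),
        "mean_response_s": (sum(response_times) / len(response_times)
                            if response_times else 0.0),
    }

# Two failures: responses took 2.5 s and 1.0 s respectively.
print(safety_metrics([(10.0, 12.5), (40.0, 41.0)]))
```

Tracking maximum rather than only mean response time matters here, since a single slow response to a failure can itself be a safety event.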
NIST-RMF-MS-2-11
Requirement
Evaluate AI System Security and Resilience
Organizations must evaluate AI system security and resilience as identified in the MAP function.
NIST-RMF-MS-2-12
Documentation
Document Security and Resilience Evaluation Results
Organizations must document the results of AI system security and resilience evaluations.
NIST-RMF-MS-2-13
Risk Management
Examine Transparency and Accountability Risks
Organizations must examine risks associated with transparency and accountability as identified in the MAP function.
NIST-RMF-MS-2-14
Documentation
Document Transparency and Accountability Risk Analysis
Organizations must document the examination of risks associated with transparency and accountability.
NIST-RMF-MS-2-15
Transparency
Explain, Validate, and Document AI Model
Organizations must explain, validate, and document the AI model as identified in the MAP function to inform responsible use and governance.
NIST-RMF-MS-2-16
Transparency
Interpret AI System Output Within Context
Organizations must interpret AI system output within its context as identified in the MAP function to inform responsible use and governance.