NIST-AI-RMF
NIST AI Risk Management Framework 1.0 (AI 100-1)
- I. Foundational Information
- Art. FR-1. Understanding and Addressing Risks, Impacts, and Harms (3)
- Art. TR-1. Valid and Reliable (4)
- Art. TR-2. Safe (5)
- Art. TR-3. Secure and Resilient (3)
- Art. TR-4. Accountable and Transparent (3)
- Art. TR-5. Explainable and Interpretable (3)
- Art. TR-6. Privacy-Enhanced ref
- Art. TR-7. Fair — with Harmful Bias Managed ref
- II. AI RMF Core Framework
- Ch. 1 — GOVERN
- Art. GV-1. Policies, Processes, Procedures, and Practices (8)
- Art. GV-2. Accountability Structures (3)
- Art. GV-3. Workforce Diversity, Equity, Inclusion, and Accessibility (2)
- Art. GV-4. Organizational Culture of AI Risk (6)
- Art. GV-5. Engagement with Relevant AI Actors (3)
- Art. GV-6. Third-Party AI Risks and Supply Chain (3)
- Ch. 2 — MAP
- Art. MP-1. Context is Established and Understood (8)
- Art. MP-2. Categorization of the AI System (6)
- Art. MP-3. AI Capabilities, Usage, Goals, Benefits, and Costs (5)
- Art. MP-4. Third-Party Component Risks and Benefits (5)
- Art. MP-5. Impact Characterization (4)
- Ch. 3 — MEASURE
- Art. MS-1. Appropriate Methods and Metrics (11)
- Art. MS-2. Trustworthy Characteristics Evaluation (24)
- Art. MS-3. Risk Tracking Mechanisms (5)
- Art. MS-4. Measurement Efficacy Feedback (6)
- Ch. 4 — MANAGE
- Art. MG-1. Risk Prioritization and Response (4)
- Art. MG-2. Strategies for Benefits and Impact Management (6)
- Art. MG-3. Third-Party AI Risk Management (2)
- Art. MG-4. Risk Treatment and Communication Plans (5)
- Annex A. NIST AI RMF Subcategory Reference
Title I — Foundational Information
Article TR-2. Safe
3 obligations
NIST-RMF-TR-2-03
Transparency
Provide clear information on responsible use to deployers
Providers must give deployers clear information regarding the responsible use of the AI system
NIST-RMF-TR-2-04
Requirement
Responsible decision-making by deployers and operators
Deployers and operators must engage in responsible decision-making practices when using AI systems
NIST-RMF-TR-2-05
Documentation
Explanation and documentation of risks based on empirical evidence
Provide explanation and documentation of risks supported by empirical evidence of incidents to improve safe operation
Article TR-3. Secure and Resilient
3 obligations
NIST-RMF-TR-3-01
Requirement
Ensure AI System Resilience Against Adverse Events
AI systems must be designed and implemented to withstand unexpected adverse events or unexpected changes in their environment or use
NIST-RMF-TR-3-02
Requirement
Implement Safe Degradation Mechanisms
AI systems must be designed to degrade safely and gracefully when maintaining full functionality is not possible due to unexpected adverse events or changes in their environment or use
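The framework does not prescribe an implementation for safe degradation, but the pattern is commonly realized as a confidence-gated fallback: the AI output is used only when it can be trusted, and a conservative default takes over when the model fails or is uncertain. A minimal sketch, assuming a hypothetical classifier interface and an illustrative confidence threshold (neither is from the framework text):

```python
from typing import Callable

# Hypothetical sketch of a safe-degradation wrapper. The function names,
# signature, and 0.8 threshold are illustrative assumptions, not part of
# the NIST AI RMF.
def classify_with_fallback(
    model_predict: Callable[[str], tuple[str, float]],
    fallback: Callable[[str], str],
    text: str,
    min_confidence: float = 0.8,
) -> str:
    """Use the model's label only when it is confident; otherwise
    degrade gracefully to a conservative rule-based fallback."""
    try:
        label, confidence = model_predict(text)
    except Exception:
        # Model failure: degrade safely instead of propagating the error.
        return fallback(text)
    if confidence < min_confidence:
        # Low confidence: prefer the conservative default.
        return fallback(text)
    return label
```

The key design choice is that the fallback path is exercised both on outright failure and on low confidence, so the system never silently loses its safety behavior while partial functionality remains available.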
NIST-RMF-TR-3-03
Requirement
Ensure Deployment Ecosystem Resilience
The ecosystems in which AI systems are deployed must be resilient and able to withstand unexpected adverse events or changes in their environment or use
Article TR-4. Accountable and Transparent
3 obligations
NIST-RMF-TR-4-01
Transparency
Provide meaningful transparency about AI system and outputs
Must provide access to appropriate levels of information about the AI system and its outputs to individuals interacting with the system
NIST-RMF-TR-4-02
Transparency
Ensure transparency regardless of user awareness of AI interaction
Must make information about the AI system and its outputs available to individuals interacting with the system even when they are unaware that they are interacting with an AI system
NIST-RMF-TR-4-03
Requirement
Establish transparency as prerequisite for other trustworthy AI characteristics
Must implement transparency measures as a foundational requirement that enables and supports the achievement of other trustworthy AI characteristics
Article TR-5. Explainable and Interpretable
3 obligations
NIST-RMF-TR-5-01
Transparency
Provide Explainable AI Systems
AI system operators and overseers must ensure their systems provide explainability: a representation of the mechanisms underlying the AI system's operation
NIST-RMF-TR-5-02
Transparency
Provide Interpretable AI System Outputs
AI system operators and overseers must ensure their systems provide interpretability: the meaning of the AI system's outputs in the context of their designed functional purposes
NIST-RMF-TR-5-03
Transparency
Enable User Understanding of AI System Functionality
AI system providers must ensure users can gain deeper insights into the functionality and trustworthiness of the system, including its outputs