NIST-AI-RMF
NIST AI Risk Management Framework 1.0 (AI 100-1)
- I. Foundational Information
- Art. FR-1. Understanding and Addressing Risks, Impacts, and Harms (3)
- Art. TR-1. Valid and Reliable (4)
- Art. TR-2. Safe (5)
- Art. TR-3. Secure and Resilient (3)
- Art. TR-4. Accountable and Transparent (3)
- Art. TR-5. Explainable and Interpretable (3)
- Art. TR-6. Privacy-Enhanced
- Art. TR-7. Fair — with Harmful Bias Managed
- II. AI RMF Core Framework
- Ch. 1 — GOVERN
- Art. GV-1. Policies, Processes, Procedures, and Practices (8)
- Art. GV-2. Accountability Structures (3)
- Art. GV-3. Workforce Diversity, Equity, Inclusion, and Accessibility (2)
- Art. GV-4. Organizational Culture of AI Risk (6)
- Art. GV-5. Engagement with Relevant AI Actors (3)
- Art. GV-6. Third-Party AI Risks and Supply Chain (3)
- Ch. 2 — MAP
- Art. MP-1. Context is Established and Understood (8)
- Art. MP-2. Categorization of the AI System (6)
- Art. MP-3. AI Capabilities, Usage, Goals, Benefits, and Costs (5)
- Art. MP-4. Third-Party Component Risks and Benefits (5)
- Art. MP-5. Impact Characterization (4)
- Ch. 3 — MEASURE
- Art. MS-1. Appropriate Methods and Metrics (11)
- Art. MS-2. Trustworthy Characteristics Evaluation (24)
- Art. MS-3. Risk Tracking Mechanisms (5)
- Art. MS-4. Measurement Efficacy Feedback (6)
- Ch. 4 — MANAGE
- Art. MG-1. Risk Prioritization and Response (4)
- Art. MG-2. Strategies for Benefits and Impact Management (6)
- Art. MG-3. Third-Party AI Risk Management (2)
- Art. MG-4. Risk Treatment and Communication Plans (5)
- Annex A. NIST AI RMF Subcategory Reference
Transparency Obligations
Title I — Foundational Information
Article TR-2. Safe
1 obligation
Article TR-4. Accountable and Transparent
2 obligations
NIST-RMF-TR-4-01
Transparency
Provide meaningful transparency about AI system and outputs
Must provide access to appropriate levels of information about the AI system and its outputs to individuals interacting with the system
NIST-RMF-TR-4-02
Transparency
Ensure transparency regardless of user awareness of AI interaction
Must make information about the AI system and its outputs available to individuals interacting with the system even when they are not aware that they are interacting with an AI system
Article TR-5. Explainable and Interpretable
3 obligations
NIST-RMF-TR-5-01
Transparency
Provide Explainable AI Systems
AI system operators and overseers must ensure their systems provide explainability: a representation of the mechanisms underlying the AI system's operation
NIST-RMF-TR-5-02
Transparency
Provide Interpretable AI System Outputs
AI system operators and overseers must ensure their systems provide interpretability: the meaning of the AI system's output in the context of its designed functional purpose
NIST-RMF-TR-5-03
Transparency
Enable User Understanding of AI System Functionality
AI system providers must ensure users can gain deeper insights into the functionality and trustworthiness of the system, including its outputs
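An obligations register like the one above follows a regular record shape (ID, category, title, description), where the ID itself encodes the article. A minimal, illustrative sketch of how such records could be represented and queried in Python — the field names and sample descriptions are assumptions for this example, not part of the NIST AI RMF:

```python
from dataclasses import dataclass

# Hypothetical record layout: the field names (obligation_id, category,
# title, text) are illustrative and not defined by the NIST AI RMF.
@dataclass(frozen=True)
class Obligation:
    obligation_id: str  # e.g. "NIST-RMF-TR-5-01"
    category: str       # obligation category, e.g. "Transparency"
    title: str
    text: str           # abbreviated here for the example

    @property
    def article(self) -> str:
        # "NIST-RMF-TR-5-01" -> "TR-5"
        parts = self.obligation_id.split("-")
        return "-".join(parts[2:4])

REGISTER = [
    Obligation("NIST-RMF-TR-5-01", "Transparency",
               "Provide Explainable AI Systems",
               "Ensure systems provide explainability."),
    Obligation("NIST-RMF-TR-5-02", "Transparency",
               "Provide Interpretable AI System Outputs",
               "Ensure systems provide interpretability."),
    Obligation("NIST-RMF-GV-4-03", "Transparency",
               "Communicate AI Impacts Broadly",
               "Communicate AI impacts beyond internal documentation."),
]

def by_article(register: list[Obligation], article: str) -> list[Obligation]:
    """Return the obligations belonging to one article, e.g. 'TR-5'."""
    return [o for o in register if o.article == article]
```

Deriving the article from the ID rather than storing it separately keeps each record self-describing and avoids the two fields drifting apart.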
Title II — AI RMF Core Framework
Chapter 1 — GOVERN
Article GV-1. Policies, Processes, Procedures, and Practices
1 obligation
Article GV-4. Organizational Culture of AI Risk
2 obligations
NIST-RMF-GV-4-03
Transparency
Communicate AI Impacts Broadly
Organizational teams must communicate about the impacts of AI technology more broadly beyond internal documentation, ensuring broader awareness of those impacts among relevant AI actors
NIST-RMF-GV-4-06
Transparency
Establish Information Sharing Practices
Organizations must establish organizational practices that enable information sharing related to AI systems, facilitating the identification and management of AI risks and incidents
Chapter 2 — MAP
Article MP-2. Categorization of the AI System
1 obligation
Chapter 3 — MEASURE
Article MS-2. Trustworthy Characteristics Evaluation
2 obligations
NIST-RMF-MS-2-15
Transparency
Explain, Validate, and Document AI Model
Organizations must explain, validate, and document the AI model as identified in the MAP function to inform responsible use and governance
NIST-RMF-MS-2-16
Transparency
Interpret AI System Output Within Context
Organizations must interpret AI system output within its context as identified in the MAP function to inform responsible use and governance
Article MS-3. Risk Tracking Mechanisms
1 obligation
Chapter 4 — MANAGE
Article MG-4. Risk Treatment and Communication Plans
1 obligation