

NIST AI Risk Management Framework 1.0 (AI 100-1)

US Version 1.0 137 obligations
Showing 76–100 of 137 obligations

Title I — Foundational Information

Title II — AI RMF Core Framework

Chapter 1 — GOVERN

Chapter 2 — MAP

Chapter 3 — MEASURE

Article MS-1. Appropriate Methods and Metrics

9 obligations

NIST-RMF-MS-1-03 Monitoring

Regularly assess appropriateness of AI metrics and effectiveness of controls

Organizations must conduct regular assessments of whether their AI metrics remain appropriate and whether existing controls remain effective.

NIST-RMF-MS-1-04 Monitoring

Include error reports in regular assessments of metrics and controls

Organizations must incorporate reports of errors into their regular assessments of AI metrics appropriateness and control effectiveness.

NIST-RMF-MS-1-05 Risk Management

Consider potential impacts on affected communities in assessments

Organizations must include consideration of potential impacts on affected communities when regularly assessing the appropriateness of AI metrics and the effectiveness of existing controls.

NIST-RMF-MS-1-06 Human Oversight

Involve internal experts who were not front-line developers in assessments

Organizations must involve internal experts who did not serve as front-line developers of the AI system in regular assessments and updates of AI metrics and controls.

NIST-RMF-MS-1-07 Human Oversight

Involve independent assessors in regular assessments and updates

Organizations must involve independent assessors (external to the organization) in regular assessments and updates of AI metrics and controls.

NIST-RMF-MS-1-08 Human Oversight

Consult domain experts in assessments as necessary per risk tolerance

Organizations must consult domain experts to support assessments when necessary based on their organizational risk tolerance.

NIST-RMF-MS-1-09 Human Oversight

Consult users in assessments as necessary per organizational risk tolerance

Organizations must consult users of the AI system to support assessments when necessary based on their organizational risk tolerance.

NIST-RMF-MS-1-10 Human Oversight

Consult external AI actors in assessments as necessary per risk tolerance

Organizations must consult AI actors external to the team that developed or deployed the AI system to support assessments when necessary based on their organizational risk tolerance.

NIST-RMF-MS-1-11 Human Oversight

Consult affected communities in assessments as necessary per risk tolerance

Organizations must consult affected communities to support assessments when necessary based on their organizational risk tolerance.

Article MS-2. Trustworthy Characteristics Evaluation

16 obligations

NIST-RMF-MS-2-01 Documentation

Document TEVV Test Sets, Metrics, and Tools

Organizations must document test sets, metrics, and details about the tools used during Testing, Evaluation, Validation, and Verification (TEVV).
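As a rough illustration of what such TEVV documentation could look like in machine-readable form, here is a minimal sketch; the `TEVVRecord` fields and example values are illustrative assumptions, not structures defined by the framework:

```python
from dataclasses import dataclass, field, asdict


@dataclass
class TEVVRecord:
    """Illustrative record of one TEVV run: test set, metrics, and tooling."""
    system_id: str
    test_set: str  # name or URI of the evaluation dataset
    metrics: dict  # metric name -> observed value
    tools: list = field(default_factory=list)  # evaluation tools with versions


record = TEVVRecord(
    system_id="credit-scoring-v2",
    test_set="holdout-2024Q1",
    metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    tools=["scikit-learn 1.4", "evidently 0.4"],
)
print(asdict(record)["metrics"]["accuracy"])  # 0.91
```

Serializing records like this (e.g. to JSON) makes it straightforward to keep evaluation evidence alongside the system it describes.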

NIST-RMF-MS-2-02 Requirement

Ensure Human Subject Evaluations Meet Protection Requirements

When conducting evaluations involving human subjects, organizations must ensure these evaluations meet applicable human subjects protection requirements.

NIST-RMF-MS-2-03 Requirement

Measure and Demonstrate AI System Performance Criteria

Organizations must measure AI system performance or assurance criteria qualitatively or quantitatively and demonstrate them for conditions similar to the deployment setting(s).

NIST-RMF-MS-2-04 Documentation

Document Performance and Assurance Measures

Organizations must document the measures used to evaluate AI system performance or assurance criteria.

NIST-RMF-MS-2-05 Monitoring

Monitor AI System Functionality and Behavior in Production

Organizations must monitor the functionality and behavior of the AI system and its components when in production, as identified in the MAP function.
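One simple way to sketch such a production check is an output-drift test against a baseline; the mean-shift heuristic and threshold below are illustrative assumptions, not a method prescribed by the framework:

```python
import statistics


def check_output_drift(baseline: list, production: list,
                       threshold: float = 0.1) -> bool:
    """Flag drift when the production mean shifts from the baseline mean
    by more than `threshold` baseline standard deviations.

    Deliberately simple; real monitoring would use richer statistical tests
    and track inputs, outputs, and system health together.
    """
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline) or 1.0  # guard against zero spread
    shift = abs(statistics.mean(production) - base_mean) / base_std
    return shift > threshold


baseline_scores = [0.60, 0.62, 0.61, 0.59, 0.63]
drifted_scores = [0.75, 0.78, 0.74, 0.77, 0.76]
print(check_output_drift(baseline_scores, drifted_scores))  # True
```

A check like this would typically run on a schedule against recent production outputs, with alerts routed to the team responsible for the system.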

NIST-RMF-MS-2-06 Requirement

Demonstrate AI System Validity and Reliability

Organizations must demonstrate that the AI system to be deployed is valid and reliable.

NIST-RMF-MS-2-07 Documentation

Document Generalizability Limitations

Organizations must document limitations of the generalizability of the AI system beyond the conditions under which the technology was developed.

NIST-RMF-MS-2-08 Risk Management

Regularly Evaluate AI System for Safety Risks

Organizations must evaluate the AI system regularly for safety risks as identified in the MAP function.

NIST-RMF-MS-2-09 Requirement

Demonstrate AI System Safety and Risk Tolerance

Organizations must demonstrate that the AI system to be deployed is safe, its residual negative risk does not exceed the organizational risk tolerance, and it can fail safely.

NIST-RMF-MS-2-10 Requirement

Implement Safety Metrics for System Reliability and Monitoring

Organizations must ensure safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
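Response-time tracking of this kind can be sketched by summarizing detection-to-mitigation durations from incident logs; the function name and record shape are illustrative assumptions:

```python
from datetime import datetime, timedelta


def response_time_stats(incidents: list) -> dict:
    """Summarize response times (detection -> mitigation) for AI system
    failures. Each incident is a (detected_at, mitigated_at) pair."""
    durations = [(end - start).total_seconds() for start, end in incidents]
    return {
        "count": len(durations),
        "max_seconds": max(durations),
        "mean_seconds": sum(durations) / len(durations),
    }


t0 = datetime(2024, 1, 1, 9, 0)
incidents = [(t0, t0 + timedelta(minutes=5)),
             (t0, t0 + timedelta(minutes=15))]
print(response_time_stats(incidents))
# {'count': 2, 'max_seconds': 900.0, 'mean_seconds': 600.0}
```

Summary statistics like these can then be compared against an organization's stated response-time targets as part of regular safety reviews.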

NIST-RMF-MS-2-11 Requirement

Evaluate AI System Security and Resilience

Organizations must evaluate AI system security and resilience as identified in the MAP function.

NIST-RMF-MS-2-12 Documentation

Document Security and Resilience Evaluation Results

Organizations must document the results of AI system security and resilience evaluations.

NIST-RMF-MS-2-13 Risk Management

Examine Transparency and Accountability Risks

Organizations must examine risks associated with transparency and accountability as identified in the MAP function.

NIST-RMF-MS-2-14 Documentation

Document Transparency and Accountability Risk Analysis

Organizations must document the examination of risks associated with transparency and accountability.

NIST-RMF-MS-2-15 Transparency

Explain, Validate, and Document AI Model

Organizations must explain, validate, and document the AI model as identified in the MAP function to inform responsible use and governance.

NIST-RMF-MS-2-16 Transparency

Interpret AI System Output Within Context

Organizations must interpret AI system output within its context as identified in the MAP function to inform responsible use and governance.
