
What AI Compliance Actually Requires — The Definitive List

March 10, 2026 | 12 min read | ReguLume
ai-compliance eu-ai-act obligations regulatory-intelligence

Last week I counted. Fourteen LinkedIn posts about AI governance in my feed. Eight used the phrase “comprehensive framework.” Five mentioned the EU AI Act by name. One — just one — cited a specific article number.

This is the state of AI compliance discourse in 2026. Loud on principles. Pretty quiet on specifics.

What I haven’t seen anyone publishing: the actual list. Not a framework summary. Not a governance maturity model. Not another thought leadership piece about “the importance of responsible AI.” The list — obligation by obligation, regulation by regulation — of what AI compliance actually requires.

We built it. 2,964 specific obligations across 15 regulations.

Here’s every one of them.


The Number Nobody’s Publishing

“Implement a risk management system.”

That’s what most AI compliance guides tell you about EU AI Act Article 9. It’s not wrong. It’s just not useful.

Article 9 contains 23 discrete obligations. They specify what the risk management system must include, how it must be documented, when it must be updated, and what evidence you need to prove it works. “Implement a risk management system” covers a paragraph of guidance. The 23 obligations cover what an auditor will actually check.

This is the gap between framework-level advice and obligation-level compliance. One sounds authoritative. The other is actually enforceable.

We decomposed 15 regulatory texts — EU, US federal, US state, and international standards — into individual, enforceable obligation points. The total: 2,964.

| Regulation | Jurisdiction | Obligations |
| --- | --- | --- |
| GDPR | EU | 630 |
| EU DORA | EU | 606 |
| ISO 42001 | International | 503 |
| EU AI Act | EU | 334 |
| CCPA/CPRA | California | 292 |
| NIST AI RMF | US Federal | 137 |
| Texas TRAIGA | Texas | 99 |
| CAN-SPAM | US Federal | 63 |
| Illinois AIVIA | Illinois | 62 |
| CA GenAI Transparency | California | 58 |
| Utah AI Policy Act | Utah | 49 |
| NYC Local Law 144 | New York City | 48 |
| OR AI Companion Safety | Oregon | 30 |
| CA Chatbot Safety | California | 29 |
| Colorado AI Act | Colorado | 24 |
| Total | | 2,964 |
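If you want to sanity-check that arithmetic yourself, the table can be treated as plain data. The counts below are copied directly from the table; the dictionary itself is just an illustration, not ReguLume's data model:

```python
# Obligation counts per regulation, copied from the table above.
OBLIGATIONS = {
    "GDPR": 630,
    "EU DORA": 606,
    "ISO 42001": 503,
    "EU AI Act": 334,
    "CCPA/CPRA": 292,
    "NIST AI RMF": 137,
    "Texas TRAIGA": 99,
    "CAN-SPAM": 63,
    "Illinois AIVIA": 62,
    "CA GenAI Transparency": 58,
    "Utah AI Policy Act": 49,
    "NYC Local Law 144": 48,
    "OR AI Companion Safety": 30,
    "CA Chatbot Safety": 29,
    "Colorado AI Act": 24,
}

# The denominator: every row summed.
total = sum(OBLIGATIONS.values())
print(total)  # 2964
```

The rows really do sum to 2,964.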

That’s the denominator nobody mentions. When someone says their organization is “AI compliant” — compliant with which of these 2,964 requirements?


What an “Obligation” Actually Is

A regulation is a document. An obligation is a specific, enforceable requirement extracted from that document.

The distinction matters more than most compliance professionals realize.

Take the EU AI Act’s transparency provisions. Article 13(1) states that high-risk AI systems “shall be designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately.”

One article. Multiple obligations: the system must provide interpretable outputs, the deployer must be enabled to use those outputs appropriately, and “sufficiently transparent” must be defined and documented per system. Each needs separate evidence. Separate controls. Separate audit documentation.

Framework-level guidance flattens all of this into a single checkbox: “Transparency — Yes/No.”

An auditor won’t accept that.
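The decomposition above is easier to see as records than as prose. Here is a minimal sketch of Article 13(1) split into the three obligations just described; the field names and evidence descriptions are our own illustration, not ReguLume's actual schema:

```python
from dataclasses import dataclass

# Illustrative record structure: one obligation = one auditable unit.
@dataclass
class Obligation:
    regulation: str   # source framework
    citation: str     # article or clause it was extracted from
    requirement: str  # the specific, enforceable action
    evidence: str     # what an auditor would ask to see

# One article, three separate obligations, three separate evidence trails.
article_13_1 = [
    Obligation("EU AI Act", "Article 13(1)",
               "System operation is transparent enough to interpret outputs",
               "interpretability documentation per system"),
    Obligation("EU AI Act", "Article 13(1)",
               "Deployers are enabled to use outputs appropriately",
               "deployer instructions and usage records"),
    Obligation("EU AI Act", "Article 13(1)",
               "'Sufficiently transparent' is defined and documented per system",
               "documented transparency criteria per system"),
]

print(len(article_13_1))  # 3
```

A single "Transparency — Yes/No" checkbox collapses three distinct evidence requirements into one, which is exactly the flattening the checkbox approach can't survive.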


11 Types of Obligations

Not all obligations are the same kind of requirement. The EU AI Act itself organizes its requirements into distinct categories — each article or chapter heading names a specific obligation type. Those categories recur across ISO 42001, NIST AI RMF, and GDPR. We use the regulation’s own structural vocabulary to classify all 2,964 obligations into 11 types.

Prohibitions. EU AI Act Chapter II, Article 5: “Prohibited AI Practices.” Things you cannot do. Social scoring systems, certain biometric categorization uses, emotion recognition in workplaces and schools. These are binary — violate them and you face the maximum penalty tier. EUR 35 million or 7% of global turnover, whichever is higher.

Requirements. EU AI Act Chapter III, Section 2: “Requirements for High-Risk AI Systems.” Affirmative actions you must take. The baseline capabilities and properties your AI systems must demonstrate. Articles 8 through 15 decompose these into specific technical and organizational mandates.

Risk management. EU AI Act Article 9: “Risk management system.” ISO 42001 Annex A.2: “AI Risk Management.” Ongoing processes for identifying, assessing, and mitigating risks throughout the AI lifecycle. Not a one-time assessment. A living system with update triggers, escalation criteria, and evidence of continuous operation.

Data governance. EU AI Act Article 10: “Data and data governance.” ISO 42001 Annex A.5: “Data Governance and Data Quality.” Rules for training data, validation data, and testing data. Relevance. Representativeness. Freedom from errors. Completeness. Article 10 is one of the most technically demanding articles in the EU AI Act — and it has more cross-references to GDPR than any other provision.

Documentation. EU AI Act Article 11: “Technical documentation.” ISO 42001 Annex A.3. Records you must create and maintain. Technical documentation under Article 11. Quality management records under Article 17. Logs of automatic recording under Article 12. This is where compliance teams spend the most time and where the most gaps hide.

Transparency. EU AI Act Article 13: “Transparency and provision of information to deployers.” Chapter IV: “Transparency Obligations.” GDPR Article 12: “Transparent information.” ISO 42001 Annex A.3. Disclosures you must make to users, regulators, or the public. Tell users they’re interacting with AI. Disclose the logic behind automated decisions. Publish information about training data. GDPR, the EU AI Act, and NYC LL144 all impose distinct transparency requirements — sometimes overlapping, sometimes contradictory.

Human oversight. EU AI Act Article 14: “Human oversight.” ISO 42001 Annex A.4: “Accountability and Human Oversight.” NIST AI RMF MAP 3.5. Requirements for meaningful human control over AI decisions. Not “human in the loop” as a checkbox. Specific capabilities that human reviewers must have, specific authorities they must exercise, specific conditions under which they must intervene. Article 14 alone contains 9 separate oversight obligations.

Conformity assessment. EU AI Act Articles 40-49, Article 43: “Conformity assessment.” Processes for proving your system meets requirements before you deploy it. Third-party assessment by a notified body for certain high-risk systems, chiefly biometrics under Annex III, point 1; internal-control self-assessment for the rest. Each path has distinct procedural obligations with different documentation requirements.

Registration. EU AI Act Article 49: “Registration.” Article 71: “EU database for high-risk AI systems.” Requirements to register AI systems in official databases before deployment. Administrative — but non-compliance penalties are significant.

Monitoring. EU AI Act Article 72: “Post-market monitoring.” ISO 42001 Annex A.8: “Monitoring, Performance Evaluation, and Continual Improvement.” NIST AI RMF MEASURE 2.4. Post-deployment obligations that never end. Track performance. Detect drift. Report incidents. Article 72’s post-market monitoring plan is separate from Article 73’s serious incident reporting — each with its own timeline, format, and evidence requirements.

Reporting. EU AI Act Article 73: “Reporting of serious incidents.” GDPR Articles 33-34. ISO 42001 Annex A.10: “AI Incident Management.” Specific obligations to report to regulators, users, or the public on defined schedules. Incident reports. Annual compliance summaries. Public transparency reports. Each regulation defines its own reporting cadence and content requirements.

These aren’t categories we invented. They’re the regulatory texts’ own organizational structure — applied consistently across 15 frameworks. The distribution across regulations is uneven. GDPR concentrates on data governance and transparency. The EU AI Act loads heavily on risk management and documentation. US state laws tend to focus on transparency and human oversight. ISO 42001 distributes requirements across all 11 types.

Understanding which obligation types apply to your systems — and from which regulations — is the first step toward knowing whether you’re compliant or just hoping you are.


EU AI Act: 334 Obligations

The most complex AI-specific regulation in force. 334 obligation points spread across 113 articles, 180 recitals, and 13 annexes.

Four risk tiers structure everything.

Unacceptable risk. Banned outright. Social scoring, certain biometric surveillance, manipulation of vulnerable groups. No compliance path — deployment is prohibited.

High risk. The heaviest obligation set. Systems classified under Annex III categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice. High-risk systems face Articles 6 through 27 — risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Each article decomposes into 5-23 individual obligations.

Limited risk. Primarily transparency obligations. Chatbots, deepfakes, emotion recognition systems. Narrower requirements, but still specific and enforceable.

Minimal risk. Voluntary codes of conduct. No binding obligations beyond general-purpose AI rules for qualifying systems.

The high-risk system deadline is August 2, 2026. Five months from today. Compliance timelines for organizations that haven’t started run 8-14 months.

That arithmetic should concern you.


US State Laws: More Than You Think, Fewer Than LinkedIn Claims

“Twenty US states are enforcing AI regulations.” You’ve seen this claim. It’s wrong.

The actual count of enacted, enforceable AI-specific laws: seven.

Colorado AI Act — 24 obligations. Enforcement begins June 30, 2026 — two months before the EU AI Act deadline. Focuses on “consequential decisions” made by high-risk AI systems. Colorado grants a rebuttable presumption of compliance to organizations following NIST AI RMF. That single provision makes NIST’s 137 obligations directly relevant to every organization operating in Colorado.

Utah AI Policy Act — 49 obligations. Disclosure-focused. Requires clear labeling when AI generates content or interacts with consumers.

NYC Local Law 144 — 48 obligations. Targets automated employment decision tools specifically. Annual bias audits by independent auditors, published audit summaries, and candidate notification requirements. The most prescriptive US law for a single AI use case.

Illinois AIVIA — 62 obligations. Covers AI in video interviews — consent requirements, data retention rules, and demographic analysis restrictions.

Texas TRAIGA — 99 obligations. Broader scope than most state laws. Covers high-risk AI in healthcare, financial services, education, and criminal justice.

Oregon AI Companion Safety Act — 30 obligations. Targets AI companion chatbots — disclosure requirements, minor protections (hourly break reminders, content prohibitions, emotional manipulation bans), crisis protocols, annual reporting.

California Companion Chatbot Safety Act — 29 obligations. Suicide and self-harm prevention protocols for AI chatbots, annual reporting to CA Dept of Public Health, private right of action with fee-shifting.

Seven laws. 341 state-level obligations. Beyond them, LegiScan tracks 550+ active AI bills across 45+ states. California alone has the GenAI Transparency Act — 58 obligations already in force — plus dozens more pending.

The US AI regulatory landscape isn’t hypothetical. It’s fragmented, overlapping, and growing by the month.


The Standards That Became Requirements

Two frameworks occupy a strange category: technically voluntary, practically mandatory.

NIST AI RMF — 137 obligations organized across four functions: Govern, Map, Measure, Manage. A voluntary framework by design. But Colorado’s safe harbor provision changed the calculus. Comply with NIST AI RMF and you get a presumption of compliance — turning 137 “recommended practices” into 137 things your legal team now cares about. Other states are expected to follow Colorado’s lead.

ISO 42001 — 503 obligations covering AI management systems. Certification is increasingly a procurement prerequisite. Enterprise clients are adding ISO 42001 to RFPs. Insurance underwriters are asking about it. The distinction between “required by law” and “required by your largest customer” is academic when the contract is on the table.

Together, NIST and ISO add 640 obligations most organizations will need to address — not because a regulator demands it today, but because their customers, insurers, and partners will.


The Regulations You’re Already Subject To

AI compliance doesn’t exist in isolation. Three regulations in our dataset apply to any organization processing personal data or operating in regulated industries — regardless of whether they consider themselves “AI companies.”

GDPR — 630 obligations. If your AI system processes personal data of EU residents — and nearly all do — GDPR’s data protection requirements layer directly on top of the EU AI Act. Data governance under AI Act Article 10 explicitly references GDPR’s purpose limitation, data minimization, and lawful basis requirements. These aren’t separate compliance tracks. They’re interleaved.

CCPA/CPRA — 292 obligations. California’s privacy framework applies to businesses meeting revenue or data volume thresholds. Automated decision-making using personal information triggers specific opt-out rights, access rights, and transparency requirements that compound with any AI-specific obligations.

EU DORA — 606 obligations. The Digital Operational Resilience Act applies to financial entities in the EU. If your AI system operates within a bank, insurer, or investment firm, DORA’s ICT risk management, incident reporting, and third-party oversight requirements apply alongside the AI Act. 606 obligations from a regulation most AI compliance conversations never mention.

These aren’t peripheral requirements. They’re concurrent.

An AI system deployed in a European bank faces EU AI Act + GDPR + DORA obligations simultaneously. A marketing AI in California hits CCPA/CPRA + CAN-SPAM + CA GenAI Transparency at minimum. The obligation count compounds — and most organizations are tracking each regulation in separate spreadsheets maintained by different teams who rarely compare notes.
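Using the counts from the table earlier in the post, the compounding is easy to make concrete. The sketch below is a deliberately naive upper bound: obligations overlap across regulations, so the real de-duplicated number for any system is lower, and the `exposure_ceiling` helper is our own illustrative name:

```python
# Per-regulation obligation counts, copied from the table earlier in the post.
COUNTS = {
    "EU AI Act": 334,
    "GDPR": 630,
    "EU DORA": 606,
    "CCPA/CPRA": 292,
    "CAN-SPAM": 63,
    "CA GenAI Transparency": 58,
}

def exposure_ceiling(regulations: list[str]) -> int:
    """Naive upper bound on obligations for one deployment scenario.

    Obligations overlap between frameworks (e.g. AI Act Article 10
    cross-references GDPR), so the true de-duplicated count is lower.
    """
    return sum(COUNTS[r] for r in regulations)

# An AI system in a European bank.
print(exposure_ceiling(["EU AI Act", "GDPR", "EU DORA"]))  # 1570

# A marketing AI operating in California.
print(exposure_ceiling(["CCPA/CPRA", "CAN-SPAM", "CA GenAI Transparency"]))  # 413
```

Even the smaller scenario lands at several hundred candidate obligations, which is why tracking each regulation in a separate spreadsheet quietly loses the overlaps.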


So What Does This Mean for You?

If you’re a compliance consultant, you already know these regulations exist. You might know them well. The question isn’t whether you understand Article 9. It’s whether your clients can prove they satisfy its 23 specific requirements when an auditor asks.

2,964 obligations is the scope of the problem.

Not the abstract “we need to think about AI governance” problem. The concrete “an auditor is requesting evidence against each of these requirements” problem.

Framework-level governance is necessary. Nobody can dispute that. You need accountability structures, RACI models, executive sponsorship, risk appetite statements, policy documents. Governance answers who is responsible.

It doesn’t answer what they’ve done.

When a regulator or vendor auditor asks which Article 9 requirements have been satisfied — with what evidence, at what confidence level, for which systems — the governance framework goes silent. Obligation-level compliance mapping speaks up, and speaks clearly.

We built the database. Every obligation from 15 regulations, decomposed from the actual legal text, searchable by regulation, obligation type, risk classification, and applicable role. With the source citation for every single one.

Browse the complete obligation database — free, no account required — at regulume.com/compliance. Search by regulation. Filter by obligation type. Drill into any article.

Because if you’re going to tell a board their AI systems are compliant, you should know exactly which of the 2,964 requirements you’ve checked.

And which ones you haven’t.


ReguLume decomposes 15 AI and data regulations into 2,964 specific obligations. Browse the full obligation database at regulume.com/compliance.

Map obligations to your AI systems

ReguLume covers 2,964 obligations across 15 regulations. Score your compliance posture in hours, not months.
