EU AI Act Article 9: The 23 Obligations Nobody Summarizes
“Implement a risk management system.”
That’s the guidance. The entire output of a $2,400/hour advisory session, distilled into six words. It appears on slides at compliance conferences. It anchors thought leadership articles. It fills the “AI Risk Management” row in every governance framework spreadsheet from Big Four firms to boutique consultancies.
It is also completely useless.
Article 9 of the EU AI Act does not say “implement a risk management system” and leave it there. It runs to 10 paragraphs, two of which break into lettered sub-points, and — when you decompose the legal text into discrete, enforceable requirements — 23 specific obligations. Each one auditable. Each one carrying penalties of up to EUR 15 million or 3% of global annual turnover under Article 99(4).
An auditor won’t accept “we have a risk management framework.” They’ll ask which of the 23 requirements you’ve satisfied, with what evidence, for which systems.
Here’s every one of them.
What Article 9 Actually Says
Most compliance guides quote Article 9(1) and stop. That single paragraph establishes the obligation to have a risk management system. The remaining 9 paragraphs define what that system must contain, how it must operate, what it must produce, and who it must protect.
The 23 obligations fall into four groups — establishment requirements, risk identification steps, risk management measures, and testing obligations — followed by two special considerations that most teams miss. Miss one group and the system is incomplete — regardless of how polished your risk register looks.
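For teams that track compliance in tooling, the decomposition can be pinned down as data. A minimal sketch in Python, noting that the group labels and obligation numbers are this article’s own scheme, not language from the Act:

```python
# This article's decomposition of Article 9, as a checkable structure.
# Group labels and obligation numbers are editorial, not the Act's.
ARTICLE_9_GROUPS = {
    "establishment (9(1))": range(1, 5),                   # obligations 1-4
    "risk identification (9(2))": range(5, 12),            # obligations 5-11
    "risk management measures (9(4)-(5))": range(12, 21),  # obligations 12-20
    "testing (9(6)-(8))": range(21, 24),                   # obligations 21-23
}

# Sanity check: the four groups account for all 23 obligations.
assert sum(len(r) for r in ARTICLE_9_GROUPS.values()) == 23
```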
Group 1: Establish the System (Article 9(1))
Four obligations before anything else happens.
Obligation 1 — Establish a risk management system. The system must exist as a defined, identifiable construct — not a collection of ad hoc practices scattered across departments. Article 9(1) uses four verbs in sequence: established, implemented, documented, maintained. Each implies a distinct requirement.
Obligation 2 — Implement the system. Establishment and implementation are separate obligations. A documented risk management policy sitting in a SharePoint folder is established. A system where teams actively execute risk assessments against defined criteria is implemented. The distinction matters in audit.
Obligation 3 — Document the system. The system itself — not just its outputs — must be documented. What the system includes, how it operates, who is responsible, what triggers a review. Article 11 and Annex IV reinforce this: Annex IV, point 5 requires “a detailed description of the risk management system in accordance with Article 9” as part of technical documentation.
Obligation 4 — Maintain the system. Maintenance is ongoing. Not annual. Not triggered only by incidents. The system must be kept current with the AI system’s evolution, deployment context changes, and post-market data.
Four obligations from one paragraph. Most summaries count it as one.
Group 2: Risk Identification (Article 9(2))
Article 9(2) is where the specificity lives. It defines the risk management system as “a continuous iterative process planned and run throughout the entire lifecycle” — then prescribes four mandatory steps.
Obligation 5 — Continuous iterative operation. The system is not a point-in-time assessment. Article 9(2) explicitly requires it to be planned and run throughout the entire lifecycle. That means pre-deployment, deployment, and post-deployment. A risk assessment conducted once during development does not satisfy this.
Obligation 6 — Regular systematic review. Separate from continuous operation. The system requires “regular systematic review and updating.” Regular implies a defined cadence. Systematic implies a structured methodology. Updating implies the system changes based on what reviews find.
Obligation 7 — Identify and analyze known risks (9(2)(a)). Risks that the high-risk AI system can pose to health, safety, or fundamental rights when used in accordance with its intended purpose. Three risk domains — health, safety, fundamental rights — each requiring separate analysis. “Known risks” means risks identifiable at the current state of the art.
Obligation 8 — Identify and analyze reasonably foreseeable risks (9(2)(a)). Same sub-article, distinct requirement. “Reasonably foreseeable” extends beyond known risks to risks that a competent person could anticipate. The standard is objective, not subjective — what a reasonable expert in the field would identify.
Obligation 9 — Estimate and evaluate misuse risks (9(2)(b)). Risks from “conditions of reasonably foreseeable misuse.” Not just intended use — foreseeable misuse. If users could plausibly apply the system outside its intended purpose, the risk management system must account for that. This is where many initial assessments fail: they analyze intended use and ignore the ways real users will actually interact with the system.
Obligation 10 — Evaluate post-market risks (9(2)(c)). Risks arising from data gathered by the post-market monitoring system under Article 72. This creates a dependency: you cannot fully satisfy Article 9(2)(c) without an operational Article 72 monitoring system feeding data back into your risk assessment. The two articles are coupled. Most organizations treat them as independent workstreams.
Obligation 11 — Adopt risk management measures (9(2)(d)). Not just identify risks — adopt “appropriate and targeted” measures to address them. Appropriate means proportionate to the risk. Targeted means specific to the risks identified in steps (a) through (c), not generic controls applied uniformly.
Seven obligations from one paragraph. Each requires separate evidence.
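One way to make “separate evidence” concrete is to shape each risk-register entry around the 9(2) steps, so that a missing field is a visible gap. A sketch; the field names are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One hazard, with distinct evidence per Article 9(2) step."""
    hazard: str
    domains: list[str]            # subset of: health, safety, fundamental rights
    known_risks: str = ""         # 9(2)(a): identifiable at the state of the art
    foreseeable_risks: str = ""   # 9(2)(a): what a competent expert would anticipate
    misuse_evaluation: str = ""   # 9(2)(b): reasonably foreseeable misuse
    post_market_findings: list[str] = field(default_factory=list)  # 9(2)(c): Article 72 feed
    measures: list[str] = field(default_factory=list)              # 9(2)(d): targeted measures

    def missing_steps(self) -> list[str]:
        """Names the 9(2) steps still lacking evidence for this hazard."""
        checks = {
            "9(2)(a) known": self.known_risks,
            "9(2)(a) foreseeable": self.foreseeable_risks,
            "9(2)(b) misuse": self.misuse_evaluation,
            "9(2)(c) post-market": self.post_market_findings,
            "9(2)(d) measures": self.measures,
        }
        return [step for step, evidence in checks.items() if not evidence]
```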
Group 3: Risk Management Measures (Articles 9(4) and 9(5))
Identifying risks is half the work. Articles 9(4) and 9(5) specify how risk management measures must be designed and what they must achieve.
Obligation 12 — Consider combined effects of requirements (9(4)). Risk management measures must account for “the effects and possible interactions resulting from the combined application of the requirements set out in this Section” (Section 2 of Chapter III, Articles 8 through 15). Translation: you cannot address Article 9 in isolation. Your risk measures must consider how they interact with data governance (Article 10), transparency (Article 13), human oversight (Article 14), and accuracy, robustness and cybersecurity (Article 15). A risk mitigation that satisfies Article 9 but undermines Article 13 transparency is non-compliant.
Obligation 13 — Minimize risks effectively (9(4)). Measures must be implemented “with a view to minimising risks more effectively” — not just addressing them. Effective minimization is the standard, which implies measurable reduction, not just the existence of a control.
Obligation 14 — Balance implementation of measures (9(4)). Achieve “an appropriate balance in implementing the measures to fulfil those requirements.” Risk management is not maximalist. Over-engineering one control at the expense of others — restricting model outputs so aggressively that transparency becomes impossible, for example — fails this balance test.
Obligation 15 — Achieve acceptable residual risk per hazard (9(5)). The residual risk for each hazard must be “judged to be acceptable.” Per hazard. Not in aggregate. Each identified risk needs its own residual risk assessment after mitigation measures are applied.
Obligation 16 — Achieve acceptable overall residual risk (9(5)). Separate from per-hazard assessment. The overall residual risk of the entire high-risk AI system must also be acceptable. A system with 20 individually acceptable hazard-level risks could still present unacceptable aggregate risk.
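Obligations 15 and 16 translate directly into a two-level check. A sketch, where the 0-to-1 scoring scale and the additive aggregation rule are assumptions — the Act prescribes the two-level structure, not the math:

```python
def residual_risk_acceptable(
    hazard_scores: dict[str, float],  # residual risk per hazard after mitigation, 0.0-1.0
    per_hazard_ceiling: float,
    overall_ceiling: float,
) -> bool:
    """Both levels must pass: each hazard alone, and the system in aggregate."""
    each_ok = all(score <= per_hazard_ceiling for score in hazard_scores.values())
    # Placeholder aggregation: 20 individually acceptable hazards can still
    # sum to an unacceptable whole, which is exactly what 9(5) guards against.
    overall_ok = sum(hazard_scores.values()) <= overall_ceiling
    return each_ok and overall_ok
```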
Obligation 17 — Eliminate or reduce risks through design (9(5)(a)). First priority in the risk hierarchy: eliminate or reduce risks “through adequate design and development.” Not bolt-on controls. Design-level changes. This is safety-by-design — the risk management version of privacy-by-design under GDPR. If a risk can be designed out, a mitigation control is insufficient.
Obligation 18 — Implement mitigation and control measures (9(5)(b)). Where risks cannot be eliminated by design, implement “adequate mitigation and control measures.” This is the second tier of the hierarchy. Only acceptable for risks where design elimination is not technically feasible.
Obligation 19 — Provide deployer information and training (9(5)(c)). Third tier: provide information required under Article 13 and, where appropriate, training to deployers. Risk management includes ensuring that deployers understand residual risks. This creates another cross-reference — your Article 9 risk management system must feed into your Article 13 transparency obligations.
Obligation 20 — Consider deployer context (9(5) final clause). When eliminating or reducing risks, give “due consideration” to the deployer’s technical knowledge, experience, education, expected training, and the “presumable context” of use. A risk deemed acceptable for a deployer with PhD-level ML expertise is not automatically acceptable for a deployer with basic technical literacy.
Nine obligations across two paragraphs. The hierarchy — design first, then controls, then information — is not optional. It’s a prescribed sequence.
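The sequence lends itself to explicit encoding. A sketch of the 9(5) hierarchy as a decision, assuming each risk record knows whether design-level elimination is technically feasible; the tier labels paraphrase the Act and the inputs are illustrative:

```python
from enum import Enum

class Tier(Enum):
    DESIGN = "9(5)(a): eliminate or reduce through design"
    CONTROLS = "9(5)(b): mitigation and control measures"
    INFORMATION = "9(5)(c): deployer information and training"

def applicable_tiers(design_feasible: bool, residual_remains: bool) -> list[Tier]:
    # Controls are a fallback, never a substitute for feasible design changes.
    tiers = [Tier.DESIGN] if design_feasible else [Tier.CONTROLS]
    # Whatever risk survives the first two tiers must reach the deployer
    # as information (and, where appropriate, training).
    if residual_remains:
        tiers.append(Tier.INFORMATION)
    return tiers
```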
Group 4: Testing (Articles 9(6), 9(7) and 9(8))
Three paragraphs dedicated to how risk management measures must be validated.
Obligation 21 — Test to identify appropriate risk measures (9(6)). Testing is mandatory. High-risk AI systems “shall be tested for the purpose of identifying the most appropriate and targeted risk management measures.” Testing must also ensure that the system performs consistently for its intended purpose and complies with the requirements of Chapter III, Section 2.
Obligation 22 — Test throughout development and before deployment (9(8)). Testing must occur “at any point in time throughout the development process, and, in any event, prior to their being placed on the market or put into service.” Pre-deployment testing is the minimum. Development-phase testing is expected wherever appropriate.
Obligation 23 — Test against predefined metrics and thresholds (9(8)). Tests must use “prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose.” No ad hoc validation. Metrics and thresholds must be defined before testing begins, and they must be appropriate — meaning justified and documented as relevant to the system’s intended purpose.
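In practice, obligation 23 means the acceptance criteria are frozen before the first test run, and results are judged only against that frozen definition. A sketch; the metric names and numbers are invented for illustration:

```python
# Defined, justified and documented BEFORE testing begins (9(8)).
PREDEFINED_THRESHOLDS = {
    "false_negative_rate": 0.02,     # probabilistic threshold tied to intended purpose
    "demographic_parity_gap": 0.05,
}

def evaluate(test_results: dict[str, float]) -> dict[str, bool]:
    """Pass/fail per metric, against the pre-committed thresholds only."""
    return {
        metric: test_results.get(metric, float("inf")) <= ceiling
        for metric, ceiling in PREDEFINED_THRESHOLDS.items()
    }
```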
Article 9(7) adds that testing procedures “may include testing in real-world conditions in accordance with Article 60.” Permissive, not mandatory — but it signals that sandbox-only testing may face scrutiny for certain system types.
The Two Requirements That Most Teams Miss
Articles 9(9) and 9(10) are easy to overlook. They don’t describe process steps. They describe who you must think about and what you can integrate.
Article 9(9) — Vulnerable group impact assessment. Providers must consider whether the high-risk AI system “is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.” This is not optional where relevant. A risk management system for a hiring tool that ignores age-related impacts — older workers, young graduates — has a gap.
Article 9(10) — Integration with existing risk processes. For providers already subject to internal risk management under other EU law — financial services firms under DORA, medical device manufacturers under MDR — the Article 9 risk management system “may be part of, or combined with” those existing processes. This is a relief valve, not a loophole. Combined doesn’t mean subsumed. The Article 9 requirements must still be met — they just don’t need a parallel, standalone system.
The August 2, 2026 Deadline
Five months away. The full application date for high-risk AI system obligations under the EU AI Act.
Here’s the math that matters: organizations starting from zero need 8-14 months to build a compliant risk management system. Not because the technology is hard. Because the organizational work is hard — identifying all high-risk systems, conducting the risk assessments, implementing design-level changes per 9(5)(a), establishing the testing infrastructure per 9(8), documenting everything per 9(1) and Article 11, and standing up the post-market monitoring system that Article 9(2)(c) depends on.
14 months from zero. 5 months until the deadline.
Those two numbers don’t reconcile.
What This Means for Your Team Today
If you’re advising clients on EU AI Act compliance, Article 9 is where most engagements start — and where most gap analyses reveal the deepest shortfalls.
Three things to do this week:
First, count your obligations. Take your current risk management documentation and map it against all 23 requirements. Most organizations satisfy 8-12 on the first pass. The gaps cluster in three areas: the continuous operation requirements (obligations 5 and 6), the risk hierarchy (obligations 17-19), and the testing specificity (obligation 23).
Second, check your cross-references. Article 9 does not stand alone. It creates dependencies on Article 10 (data governance informing risk identification), Article 13 (transparency as a risk measure), Article 14 (human oversight as a control), and Article 72 (post-market monitoring feeding back into risk assessment). A risk management system built without these connections is structurally incomplete.
Third, map evidence to obligations. For each of the 23 requirements, identify what evidence exists today. A risk register covers obligations 7-9. A testing report covers 21-23. But what covers obligation 12 — the combined effects analysis? What covers obligation 20 — the deployer context assessment? The obligations without evidence are your compliance gaps. That’s where the work starts.
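That mapping is mechanical enough to automate on day one. A first-pass sketch, where the artifact names and coverage claims are simply the examples above:

```python
# Which obligation numbers each existing artifact actually evidences.
EVIDENCE = {
    "risk register": {7, 8, 9},       # identification obligations
    "testing report": {21, 22, 23},   # testing obligations
    # Nothing yet for obligation 12 (combined effects) or 20 (deployer
    # context): the typical first-pass gaps.
}

covered = set().union(*EVIDENCE.values())
gaps = sorted(set(range(1, 24)) - covered)
print(f"{len(gaps)} obligations without evidence: {gaps}")
```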
The full text of Article 9 — decomposed into individual, searchable obligations with source citations — is available at regulume.com/compliance/. Browse by article, filter by obligation type, drill into the specific language. No account required.
Because “implement a risk management system” was never the requirement. These 23 obligations are.
ReguLume decomposes 15 AI and data regulations into 2,964 specific obligations — including all 334 from the EU AI Act. Browse the complete obligation database at regulume.com/compliance.