Your AI System Is Already Regulated — You Just Haven't Mapped the Overlaps
Three binders sit on a shelf in the compliance office of a mid-size European bank. One labeled GDPR. One labeled EU AI Act. One labeled DORA. Three different teams own them. Three separate compliance workstreams. Three sets of controls, three risk registers, three reporting cadences.
Nobody’s compared the binders.
This is how most organizations manage AI compliance in 2026 — regulation by regulation, silo by silo. The GDPR team handles data protection. The AI governance team owns the EU AI Act. The IT risk team manages DORA. They share an office. They don’t share a spreadsheet.
The problem isn’t that these teams don’t understand their own regulation. They do. The problem is what happens in the spaces between them — where Article 10 of the EU AI Act explicitly references GDPR, where DORA’s ICT requirements compound with AI Act monitoring obligations, where three different incident reporting timelines apply to the same event.
That’s where the gaps hide. And that’s where the fines live.
The Numbers
A European financial institution deploying AI systems faces three mandatory EU regulations simultaneously. Here’s the obligation count from each:
| Regulation | Obligations |
|---|---|
| GDPR | 630 |
| EU AI Act | 334 |
| EU DORA | 606 |
| Total | 1,570 |
1,570 individual, enforceable requirements. Not principles. Not framework guidance. Specific obligations that auditors check, regulators enforce, and courts reference in penalty calculations.
Most organizations assign each regulation to a different team. Reasonable — you want subject matter experts. But the regulations weren’t drafted in isolation. They reference each other. They assume shared infrastructure. And they create compound requirements that no single-regulation team is equipped to identify.
Managing 1,570 obligations in three separate spreadsheets doesn’t just create inefficiency. It creates invisible gaps — requirements that fall between teams, get counted twice, or contradict each other without anyone noticing.
Overlap Zone 1: Data Governance
AI Act Article 10 ↔ GDPR Articles 5, 6, 9, 25
This is the most technically demanding intersection in European AI regulation.
Article 10 of the EU AI Act requires that training, validation, and testing data for high-risk AI systems meet specific standards: relevance, representativeness, freedom from errors, completeness. It mandates data governance practices covering collection processes, data preparation, formulation of assumptions, and prior assessment of data availability. Seventeen obligations from a single article.
But Article 10 doesn’t operate alone. It explicitly states that training data processing “may” involve special categories of personal data under GDPR Article 9 — biometric data, health data, racial or ethnic origin — “to the extent that it is strictly necessary for the purposes of ensuring bias detection and correction.”
Read that again. The AI Act grants a narrow exemption from GDPR Article 9's prohibition on processing sensitive personal data. But only for bias detection and correction. Only when strictly necessary. And only with appropriate safeguards per GDPR Article 9(2)(g).
Your AI governance team reads Article 10 and builds data quality controls. Your GDPR team reads Article 9 and restricts sensitive data processing. Neither team realizes the AI Act carves out a conditional exception that requires both teams to coordinate — with documented justification for why the processing is “strictly necessary” and which GDPR safeguard applies.
Missing this intersection doesn’t just create a compliance gap. It creates two: the AI team may lack the training data needed for proper bias detection, and the data protection team may unknowingly block a legally permitted activity.
GDPR Article 25 compounds this further. Data protection by design and by default requires that AI systems processing personal data embed privacy safeguards from the design stage. Article 10’s data governance requirements don’t mention privacy by design. GDPR Article 25 doesn’t mention training data quality. The obligation to satisfy both — simultaneously, in the same system — exists only in the overlap.
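What closing this gap looks like in practice is a single decision artifact that both teams sign. Here is a minimal sketch in Python, assuming a hypothetical SensitiveDataException schema; none of the field names come from the regulations themselves.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SensitiveDataException:
    """Joint decision record for the AI Act Article 10 carve-out.
    Illustrative schema: the point is one artifact that both the AI
    governance and data protection teams review and sign."""
    system: str
    special_categories: list[str]   # GDPR Art. 9 categories involved
    necessity_justification: str    # why bias detection and correction require this data
    gdpr_safeguard: str             # safeguard relied on, e.g. Art. 9(2)(g)
    ai_governance_signoff: date | None = None
    data_protection_signoff: date | None = None

    def is_defensible(self) -> bool:
        # The exemption only holds with a documented justification
        # and sign-off from both teams.
        return (bool(self.necessity_justification)
                and self.ai_governance_signoff is not None
                and self.data_protection_signoff is not None)
```

A record like this forces the coordination the regulations assume: the AI team cannot touch the data without the justification, and the data protection team cannot block it without engaging with the exception.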
Overlap Zone 2: Transparency
AI Act Article 13 ↔ GDPR Articles 12-14 ↔ DORA Article 10
Three regulations. Three transparency regimes. One AI system.
AI Act Article 13 requires that high-risk AI systems be “designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately.” This means technical documentation about the system’s logic, intended purpose, level of accuracy, and known limitations — provided to the organization deploying the system.
GDPR Articles 12-14 require something different. When automated decision-making significantly affects individuals, data subjects must receive “meaningful information about the logic involved, as well as the significance and the envisaged consequences.” This isn’t for the deployer. It’s for the person affected by the AI’s decision — the loan applicant, the job candidate, the insurance customer.
DORA Article 10 adds a third layer. Financial entities must maintain and update documentation on their ICT systems — including AI systems — that is “sufficiently detailed to facilitate the understanding of risks.” This documentation serves supervisory authorities, not deployers or data subjects.
Three audiences. Three levels of detail. Three documentation formats.
The AI Act wants technical transparency for deployers. GDPR wants explainable outcomes for individuals. DORA wants risk documentation for regulators. An AI system in a bank must satisfy all three — and a transparency report written for one audience almost certainly doesn’t satisfy the other two.
Here’s where it gets operational. The AI Act team builds deployer-facing documentation. The GDPR team writes privacy notices with “meaningful information about the logic involved.” The IT risk team prepares ICT documentation for the supervisor. Three separate documents describing the same system, maintained by three separate teams, with no process to ensure they’re consistent.
When they contradict each other — and they will — the auditor notices before you do.
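One way to stop the three documents from drifting is to render all of them from a single system of record. A minimal sketch, with an illustrative SystemFacts schema that is not drawn from any of the three regulations:

```python
from dataclasses import dataclass

@dataclass
class SystemFacts:
    """Single source of truth for one AI system."""
    name: str
    model_version: str
    intended_purpose: str
    accuracy_summary: str
    known_limitations: list[str]

def deployer_doc(f: SystemFacts) -> str:
    # AI Act Art. 13: technical transparency for the deployer
    return (f"{f.name} {f.model_version}: {f.intended_purpose}. "
            f"Accuracy: {f.accuracy_summary}. "
            f"Limitations: {'; '.join(f.known_limitations)}")

def data_subject_notice(f: SystemFacts) -> str:
    # GDPR Arts. 13-14: meaningful information for the affected person
    return (f"Decisions about you may involve {f.name}. "
            f"It is used to {f.intended_purpose.lower()}.")

def supervisor_entry(f: SystemFacts) -> dict:
    # DORA-style risk documentation for the supervisory authority
    return {"system": f.name, "version": f.model_version,
            "risk_notes": f.known_limitations}
```

Because every rendering pulls from the same record, a model update changes all three outputs together, or none of them.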
Overlap Zone 3: Risk Management
AI Act Article 9 ↔ DORA Chapter II
Article 9 of the EU AI Act mandates a risk management system for high-risk AI systems. Twenty-three obligations covering identification, analysis, estimation, and evaluation of risks — continuously, throughout the system lifecycle. The system must address foreseeable misuse, data biases, and interaction effects with other systems.
DORA Chapter II (Articles 5-16) requires financial entities to establish an ICT risk management framework. This includes identification of ICT-related business functions, risk assessment methodologies, protection and prevention measures, detection mechanisms, response and recovery procedures, and learning and evolving practices.
When the high-risk AI system is ICT infrastructure in a financial institution — and increasingly, it is — both frameworks apply. Concurrently.
The question nobody is asking: do you maintain one risk management system or two?
Article 9’s risk management is AI-specific. DORA’s is ICT-broad. Article 9 requires continuous risk assessment tied to AI system updates and retraining. DORA requires ICT risk assessment tied to business impact analysis and operational continuity. The risk taxonomies differ. The assessment frequencies differ. The escalation paths differ.
But the underlying system is the same.
Running parallel risk management processes for the same AI system — one satisfying Article 9, one satisfying DORA Chapter II — means duplicate controls, inconsistent risk ratings, and gaps where each team assumes the other team covers a requirement. Merging them requires someone who understands both frameworks deeply enough to reconcile the taxonomies.
That person usually doesn’t exist.
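A first step toward reconciliation is a single register in which every risk entry is tagged with the obligations it serves under both frameworks. A sketch, with hypothetical field names:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One risk on one register, tagged with every framework it serves."""
    risk: str
    controls: list[str]
    ai_act_refs: list[str] = field(default_factory=list)  # e.g. ["Art. 9(2)(a)"]
    dora_refs: list[str] = field(default_factory=list)    # e.g. ["Art. 8(2)"]
    owner: str = "unassigned"

def single_framework_gaps(register: list[RiskEntry]) -> list[RiskEntry]:
    """Entries visible to only one framework are exactly where each
    team may be assuming the other has the risk covered."""
    return [e for e in register if not (e.ai_act_refs and e.dora_refs)]
```

The query is trivial; building the register that makes it answerable is the work.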
Overlap Zone 4: Incident Reporting
AI Act Article 73 ↔ GDPR Articles 33-34 ↔ DORA Article 19
A high-risk AI system in a European bank processes personal data, produces an incorrect output, and exposes customer records. One incident. Three reporting obligations. Three different timelines.
GDPR Articles 33-34: Notify the supervisory authority within 72 hours of becoming aware of the personal data breach. If the breach is likely to result in high risk to individuals, notify the affected data subjects “without undue delay.”
EU AI Act Article 73: Report serious incidents involving high-risk AI systems to the market surveillance authority. The reporting timeline: immediately after the provider establishes a causal link between the AI system and the incident, and no later than 15 days after becoming aware of it.
DORA Article 19: Report major ICT-related incidents to the competent authority using a standardized template. Initial notification, intermediate report, and final report — each with defined timelines and content requirements.
Three reports. Three authorities. Three timelines. Three templates.
The GDPR clock starts when you become “aware” of the breach. The AI Act clock starts when you establish a “causal link.” DORA’s clock starts at incident classification. These are different triggering events — and the gap between “awareness,” “causal link,” and “classification” can be hours or days.
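The clock logic is simple to encode once the triggers are timestamped separately. A minimal sketch, assuming you record each triggering event as its own timestamp; the DORA initial-notification window is set by the supervisory reporting standards, so the sketch returns that clock's start rather than guessing a deadline:

```python
from datetime import datetime, timedelta

def reporting_deadlines(aware: datetime, causal_link: datetime,
                        classified: datetime) -> dict[str, datetime]:
    """Three clocks, three triggers: GDPR runs from awareness, the AI Act
    from the causal-link finding (capped at 15 days from awareness),
    DORA from incident classification."""
    return {
        # GDPR Art. 33: notify the supervisory authority within
        # 72 hours of becoming aware of the personal data breach.
        "GDPR Art. 33": aware + timedelta(hours=72),
        # AI Act Art. 73: report immediately once the causal link is
        # established, and in any event within 15 days of awareness,
        # whichever comes first.
        "AI Act Art. 73": min(causal_link, aware + timedelta(days=15)),
        # DORA Art. 19: the initial-notification clock runs from
        # classification; the exact window comes from the reporting
        # standards, so we return the clock start, not a deadline.
        "DORA Art. 19 (clock starts)": classified,
    }

# One incident, three triggers hours or days apart, three due dates:
aware = datetime(2026, 3, 2, 9, 0)
deadlines = reporting_deadlines(aware,
                                causal_link=datetime(2026, 3, 4, 14, 0),
                                classified=datetime(2026, 3, 2, 16, 0))
for obligation, due in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(f"{due:%Y-%m-%d %H:%M}  {obligation}")
```

Run it and the ordering problem is visible immediately: the DORA clock starts before the GDPR deadline lands, and both can expire before the AI Act causal link is ever established.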
Miss the 72-hour GDPR window because your AI team was still establishing the causal link for Article 73 reporting? That’s a GDPR violation. File the DORA report before the AI Act causal link is established, and your incident narrative may contradict your subsequent AI Act filing.
Coordination isn’t optional. It’s a regulatory requirement that no single regulation explicitly mandates.
Overlap Zone 5: Documentation
AI Act Article 11 ↔ GDPR Article 30 ↔ DORA Article 28
Documentation is where cross-regulation complexity becomes daily operational burden.
AI Act Article 11 requires technical documentation for high-risk AI systems before market placement. Annex IV specifies what this documentation must contain: general system description, detailed development elements, monitoring and functioning information, risk management documentation. The documentation must be kept up-to-date throughout the system’s lifecycle.
GDPR Article 30 requires records of processing activities — controllers and processors must maintain written records of all personal data processing operations, including purposes, data categories, recipients, transfer safeguards, and retention periods.
DORA Article 28 mandates that financial entities maintain and update an information register of all contractual arrangements on the use of ICT services provided by third-party service providers. This includes the nature of the services, the assessment of criticality, and the identification of sub-outsourcing chains.
When your high-risk AI system processes personal data and is provided by a third-party vendor to a financial institution — a scenario that is now common, not exceptional — all three documentation requirements apply to the same system.
Article 11 documentation describes what the system does and how it was built. Article 30 documentation records what personal data it processes and why. Article 28 documentation maps the vendor relationship and its criticality assessment.
Three document sets. Three update triggers. Three retention requirements. Maintained by three teams who may not know the other documents exist.
The practical failure mode: the AI team updates the Article 11 technical documentation after a model retrain. The GDPR team’s Article 30 records still reference the old processing purposes. The third-party risk team’s DORA register doesn’t reflect the vendor’s model update. Three documents, three versions of reality, one system.
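That failure mode is detectable if the three documents are tracked against the system version they describe. A minimal sketch of a drift check, with a hypothetical DocRecord structure:

```python
from dataclasses import dataclass

@dataclass
class DocRecord:
    """One compliance document describing the same AI system."""
    name: str            # e.g. "AI Act Art. 11 technical documentation"
    owner: str           # team responsible for updates
    model_version: str   # system version the document describes

def version_drift(docs: list[DocRecord]) -> list[str]:
    """Flag documents that lag behind the newest described version."""
    latest = max(d.model_version for d in docs)  # naive: assumes versions sort lexically
    return [f"{d.name} (owner: {d.owner}) still describes {d.model_version}"
            for d in docs if d.model_version != latest]

docs = [
    DocRecord("AI Act Art. 11 technical documentation", "AI governance", "v2.4"),
    DocRecord("GDPR Art. 30 record of processing", "Data protection", "v2.1"),
    DocRecord("DORA Art. 28 information register", "Third-party risk", "v2.1"),
]
for warning in version_drift(docs):
    print("DRIFT:", warning)
```

A check this simple only works if someone decided, once, that all three documents must name the version they describe. That decision is the cross-regulation control.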
Why This Matters More Than You Think
Cross-regulation overlaps create three categories of risk that single-regulation compliance programs miss entirely.
Contradictions. Where one regulation permits what another restricts. The AI Act’s Article 10 exception for processing sensitive data for bias detection contradicts GDPR Article 9’s general prohibition — unless the team managing Article 9 knows the exception exists and documents the legal basis. Contradictions aren’t theoretical. They’re where enforcement actions land.
Redundancies. Where three teams build three separate controls for the same underlying requirement. Three risk registers. Three incident response procedures. Three documentation repositories. Redundancy isn’t just wasteful — it’s a governance risk, because inconsistent duplicate controls are worse than a single control done well.
Gaps. Where every team assumes the requirement belongs to someone else. Transparency obligations that fall between the AI team and the GDPR team. Risk management requirements that fall between AI governance and IT risk. Documentation updates that fall between all three. Gaps are silent. They don’t appear in any team’s backlog because no team owns them.
The organization that manages GDPR, EU AI Act, and DORA as three separate programs will have three complete compliance workstreams — and still fail an audit. Not because any single program is deficient, but because the intersections were never mapped.
What Cross-Regulation Mapping Actually Looks Like
Mapping 1,570 obligations individually is necessary. Mapping their intersections is what separates compliance theater from defensible compliance.
Cross-regulation mapping means identifying every point where two or more regulations impose requirements on the same system, process, or data flow. It means documenting where those requirements align (shared controls that satisfy multiple obligations), where they conflict (requirements that need reconciliation and documented decisions), and where they compound (combined requirements that exceed what any single regulation demands alone).
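In data terms, each intersection is a record that names the obligations involved, classifies the relationship, and assigns a single owner. A sketch of one such entry, with an illustrative schema:

```python
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    ALIGN = "align"        # one shared control satisfies all obligations
    CONFLICT = "conflict"  # requirements need reconciliation and a decision
    COMPOUND = "compound"  # combined requirement exceeds any single regulation

@dataclass
class Intersection:
    """One entry in a cross-regulation obligation map."""
    obligations: list[str]   # e.g. ["AI Act Art. 10", "GDPR Art. 9(2)(g)"]
    relation: Relation
    shared_asset: str        # the system, process, or data flow affected
    resolution: str          # the documented decision or shared control
    owner: str               # one named owner, not three teams

example = Intersection(
    obligations=["AI Act Art. 10", "GDPR Art. 9(2)(g)"],
    relation=Relation.CONFLICT,
    shared_asset="credit-scoring training pipeline",
    resolution="Sensitive data permitted for bias detection and correction "
               "only; necessity and safeguards documented jointly",
    owner="Head of AI Governance",
)
```

The schema matters less than the discipline it imposes: every intersection gets a classification, a written resolution, and exactly one owner.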
This isn’t a quarterly exercise. Regulations evolve. Supervisory guidance changes interpretation. New case law shifts enforcement priorities. The map has to be living — updated when any of the three regulations issues new guidance.
For a European financial institution, the cross-regulation obligation map is the difference between “we’re compliant with GDPR” and “we’re compliant” — full stop.
The Practitioner Pivot
If you’re managing AI compliance for a financial institution — or consulting to one — the question isn’t whether these overlaps exist. You probably suspected they did.
The question is whether you’ve mapped them. Specifically. Obligation by obligation.
Not “we have a GDPR program and an AI Act program and they coordinate informally.” Not “our risk team covers DORA and they talk to the AI governance people.” The informal coordination model breaks at audit time, when the auditor asks for evidence that Article 10’s data governance requirements and Article 9’s processing restrictions have been reconciled — in writing, with a documented decision trail.
1,570 obligations. Five major overlap zones. Dozens of specific intersection points where requirements align, conflict, or compound.
Browse all 1,570 EU obligations — GDPR, EU AI Act, and DORA — in the ReguLume compliance library. Search by regulation, filter by obligation type, and see the cross-references that connect them.
Because compliance isn’t three separate checkmarks. It’s one map.
And most organizations haven’t drawn it yet.
ReguLume maps 2,964 obligations across 15 AI and data regulations — including the cross-references between them. Browse the full obligation database at regulume.com/compliance.