DPIA for AI Systems: When It's Required, What It Must Cover, and How Most Organizations Get It Wrong
A DPO opens the compliance ticket queue on a Monday morning. Three new AI deployments need sign-off. A customer service chatbot. A fraud detection model. An internal HR screening tool.
She asks the same question for each: “Where’s the DPIA?”
Three project leads give the same answer: “We didn’t think we needed one.”
They were wrong three times. And they’re not alone.
TL;DR: GDPR Article 35 makes DPIAs mandatory for most AI systems processing personal data — yet the majority of organizations skip them entirely. The EDPB’s December 2024 opinion expanded this obligation to virtually all LLM deployments. A compliant DPIA isn’t a checkbox. It’s a structured risk assessment that must happen before processing begins, not after the system is in production.
When Is a DPIA Actually Mandatory for AI?
78% of organizations now report using AI in at least one business function, up from 55% two years prior (Netguru, 2026). The majority of those deployments trigger a mandatory DPIA under GDPR Article 35 — and most organizations don’t realize it.
Article 35(1) requires a DPIA “where a type of processing… is likely to result in a high risk to the rights and freedoms of natural persons.” Three specific triggers in Article 35(3) apply directly to AI:
Systematic evaluation of personal aspects. Any AI system that profiles individuals — credit scoring, hiring algorithms, behavioral analysis, customer segmentation — triggers this criterion. It doesn’t matter whether the profiling produces a “decision.” The evaluation itself is enough.
Automated decision-making with legal or significant effects. If your AI system influences whether someone gets a loan, a job, insurance coverage, or access to a service — and it does so without meaningful human intervention — you need a DPIA. Article 22 and Article 35 work in tandem here.
Large-scale processing of sensitive data. AI systems that process health records, biometric data, political opinions, or trade union membership at scale require a DPIA regardless of how sophisticated your security controls are.
What catches most organizations off guard is cumulative triggering. Under the nine-factor list in the EDPB-endorsed guidelines — which includes new technologies, systematic monitoring, and vulnerable data subjects — processing that meets two or more criteria will in most cases require a DPIA. Most AI systems hit at least three.
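To make that screening step concrete, here is a minimal sketch of the two-criteria check in Python. The criterion names and the threshold are assumptions drawn from the nine-factor guidance, not an official ruleset, and the output is a prompt for the DPO rather than a decision.

```python
# Hypothetical DPIA screening sketch. Criterion names follow the EDPB-endorsed
# nine-factor guidance, but this is not an official ruleset: treat the result
# as a flag for the DPO, not a compliance determination.

SCREENING_CRITERIA = [
    "evaluation_or_scoring",             # profiling, credit scoring, behavioral analysis
    "automated_decision_legal_effect",   # Article 22-style decisions
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_or_new_technology",  # most AI systems meet this one
    "prevents_exercise_of_rights",
]

def dpia_likely_required(triggered: set[str]) -> bool:
    """Return True when two or more screening criteria are met."""
    unknown = triggered - set(SCREENING_CRITERIA)
    if unknown:
        raise ValueError(f"Unrecognised criteria: {unknown}")
    return len(triggered) >= 2

# Example: a hiring-screening model typically hits at least three criteria.
print(dpia_likely_required({
    "evaluation_or_scoring",
    "automated_decision_legal_effect",
    "innovative_use_or_new_technology",
}))  # True
```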
What Changed in December 2024?
The EDPB’s Opinion 28/2024, issued December 17, 2024, fundamentally expanded who needs to conduct a DPIA for AI systems (EDPB, 2024). The opinion addressed data protection aspects of AI models and reached a conclusion that should concern every enterprise deploying third-party LLMs.
The board found that anonymization claims by AI model providers require thorough case-by-case assessment — including model design analysis, testing, and documentation. Because most LLMs cannot meet the anonymization bar, controllers deploying third-party LLMs must conduct comprehensive DPIAs under Article 35.
Read that carefully. If your organization uses Claude, GPT, Gemini, or any other LLM that processes personal data — even indirectly — you’re likely the data controller. And the EDPB just told you that you can’t rely on the provider’s anonymization claims to avoid a DPIA.
This isn’t a future obligation. It’s enforceable now.
What Does a Compliant DPIA Actually Contain?
EUR 7.1 billion in cumulative GDPR fines have been issued since May 2018 (DLA Piper, January 2026). A growing share of those fines cite inadequate or missing impact assessments. But what separates a compliant DPIA from a performative one?
Article 35(7) specifies four mandatory elements. Most organizations hit the first two and miss the last two entirely.
1. A systematic description of the processing. Not “we use AI for customer service.” A detailed account of what data enters the system, how it’s processed, what outputs are generated, who receives those outputs, and what the legal basis is for each processing activity. For AI systems, this includes the model architecture, training data provenance (if known), and the decision logic — or an honest acknowledgment that the decision logic isn’t fully explainable.
2. An assessment of necessity and proportionality. Why does this processing need to happen? Could you achieve the same purpose with less data, less automation, or less invasive means? This is where most AI DPIAs get superficial. “We need AI because it’s more efficient” isn’t a proportionality assessment. You need to demonstrate that the specific personal data processed, at the specific scale, through the specific model, is proportionate to the stated purpose.
3. An assessment of risks to data subjects. Not risks to the business — risks to the people whose data you’re processing. What happens if the AI produces an incorrect output? What if it discriminates? What if it leaks personal data through its outputs? What if a data subject can’t understand why a decision was made about them? Each risk needs a likelihood rating, a severity rating, and a description of its potential impact on individuals.
4. Measures to address those risks. Specific, documented controls that mitigate each identified risk. Not “we follow best practices.” Concrete measures: encryption specifications, access control lists, human review workflows, accuracy monitoring thresholds, data minimization rules, and retention limits. Each measure should map to a specific risk from step three.
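As a rough illustration of elements three and four, the sketch below pairs each identified risk with a likelihood, a severity, and the specific measure that addresses it. The field names and rating scales are assumptions, not a prescribed format.

```python
# Hypothetical structure for Article 35(7)(c)-(d): every identified risk to
# data subjects carries a likelihood, a severity, and a mapped measure.
# Field names and scales are illustrative assumptions, not a mandated format.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str   # impact on data subjects, not on the business
    likelihood: str    # e.g. "low" / "medium" / "high"
    severity: str      # e.g. "limited" / "significant" / "severe"
    measure: str       # the concrete control that addresses this risk

risk_register = [
    Risk(
        description="Chatbot output reveals another customer's personal data",
        likelihood="medium",
        severity="significant",
        measure="Output filter blocking identifiers; weekly leakage sampling",
    ),
    Risk(
        description="Model scores applicants from minority groups less favourably",
        likelihood="medium",
        severity="severe",
        measure="Quarterly bias audit; human review of all adverse decisions",
    ),
]

# A register entry without a mapped measure is an incomplete DPIA.
assert all(r.measure for r in risk_register)
```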
The Three Failures That Get Organizations Fined
Healthcare sector DPIA penalties jumped to an average of EUR 203,000 per violation — up from EUR 17,500 previously — driven by ransomware incidents linked to missing DPIAs (SecurityWall, 2026). That more-than-tenfold increase in average penalty per violation reflects regulators connecting absent DPIAs to downstream breach severity.
The pattern repeats across three failure modes.
Failure 1: Treating the DPIA as a Checkbox
A 30-page template filled with generic risk descriptions and boilerplate mitigations isn’t a DPIA. It’s documentation theater. Regulators can tell the difference — and they’re increasingly calling it out.
The Hamburg DPA fined a financial services firm EUR 492,000 in September 2025 for automated credit card rejections without adequate transparency (Clyde & Co, 2025). Applicants with demonstrably good credit were refused by algorithm. The company couldn’t explain the logic. A proper DPIA would have identified that explainability gap before the system went live — not after the regulator came calling.
Failure 2: Conducting the DPIA After Deployment
Article 35 is explicit: the assessment must occur “prior to the processing.” Not during. Not after. Prior.
Yet the most common pattern we see is this: the engineering team builds and deploys the AI system. Three months later, legal sends compliance a request to “do the DPIA paperwork.” By then, the system architecture is fixed. The data flows are established. The vendor contracts are signed. The DPIA becomes a retrospective rationalization of decisions already made — not a genuine risk assessment that could have shaped those decisions.
443 personal data breach notifications now occur per day across Europe — a 22% year-over-year increase (DLA Piper, 2026). Many of those breaches implicate DPIA failures. The assessment that should have caught the risk was either never done or done too late to matter.
Failure 3: Missing the Cross-Border Transfer Analysis
Your AI model runs in eu-west-1. Great. But where does the API call go? If you’re using a US-based AI provider — and most organizations are — personal data is crossing borders. The DPIA must assess those transfers explicitly.
The largest single GDPR fine in 2025 was EUR 530 million, issued by Ireland’s DPC for international data transfer violations (DLA Piper, 2025). Transfer impact assessments aren’t optional appendices to your DPIA. They’re core components. If your AI vendor processes data in the US, your DPIA needs to document the transfer mechanism (SCCs, adequacy decision, or derogation), the supplementary measures in place, and your assessment of whether the destination country’s surveillance laws undermine those protections.
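A hypothetical sketch of what that per-vendor transfer record might look like follows; the vendor, fields, and values are invented to show the shape of the documentation, not a template endorsed by any DPA.

```python
# Hypothetical per-vendor transfer record. The vendor name, fields, and values
# are assumptions; the point is that each non-EU flow names its mechanism,
# supplementary measures, and the surveillance-law assessment explicitly.
transfer_assessments = [
    {
        "vendor": "ExampleAI Inc. (US)",  # hypothetical vendor
        "data_categories": ["support transcripts", "account identifiers"],
        "mechanism": "SCCs (2021, module 2)",
        "supplementary_measures": [
            "TLS in transit",
            "pseudonymised prompts",
            "no training on customer data (contractual)",
        ],
        "surveillance_law_assessment": "documented 2025-Q3; residual risk: low",
    },
]

for record in transfer_assessments:
    missing = [k for k in ("mechanism", "supplementary_measures",
                           "surveillance_law_assessment") if not record.get(k)]
    if missing:
        print(f"{record['vendor']}: transfer analysis incomplete -> {missing}")
```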
The EU AI Act Just Made This Harder
EU AI Act Article 27 requires certain deployers of high-risk AI systems (public bodies, private entities providing public services, and deployers of specified Annex III systems such as credit scoring) to conduct Fundamental Rights Impact Assessments (FRIAs), with the obligation applying from August 2, 2026 (A&O Shearman, 2025). High-risk categories under Annex III include critical infrastructure, employment, essential services, law enforcement, education, and migration.
Here’s what matters for DPIAs: Article 27(4) allows organizations to consolidate their DPIA and FRIA into a single document. This is a practical concession — but it means your DPIA now needs to cover fundamental rights impacts that go beyond data protection. Discrimination. Freedom of expression. Access to essential services. The right to an effective remedy.
If you’re already struggling to produce a compliant DPIA under Article 35 alone, adding FRIA obligations to the same document will expose every gap in your current process.
The organizations that have obligation-level mapping — knowing exactly which requirements from which regulations apply to each specific AI system — can build integrated DPIA/FRIA documents efficiently. Everyone else is going to be doing it manually, regulation by regulation, hoping they haven’t missed something.
Why Obligation-Level Mapping Makes DPIAs Actionable
2,245 GDPR fines have been documented through March 2025, with an average fine of approximately EUR 2.36 million (CMS Law, 2025). The organizations behind those fines didn’t set out to violate GDPR. They missed specific obligations that applied to their specific processing activities.
A DPIA that references “GDPR Article 35” as a single line item isn’t useful. Article 35 contains multiple sub-obligations. Article 22 contains automated decision-making requirements. Articles 13 and 14 contain transparency obligations. Article 25 requires data protection by design. Article 32 requires appropriate security measures. Each of these creates discrete, testable requirements that a DPIA must address.
Obligation-level mapping breaks regulations into their individual requirements and matches them to specific systems. Instead of “we comply with GDPR,” it produces “Article 35(7)(a) requires a systematic description of processing — here is ours for System X, including data flows, legal basis, and model architecture.”
That’s the difference between a DPIA that satisfies a regulator and one that satisfies a project manager.
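A rough sketch of the idea, assuming obligations are stored as discrete items keyed to the systems they apply to; the obligation IDs, system names, and evidence paths below are invented for illustration.

```python
# Rough sketch of obligation-level mapping: each provision is split into
# discrete, testable obligations and tied to the systems it applies to.
# Obligation IDs, system names, and evidence paths are illustrative only.
obligations = {
    "GDPR-35(7)(a)": "Systematic description of the processing",
    "GDPR-35(7)(b)": "Assessment of necessity and proportionality",
    "GDPR-35(7)(c)": "Assessment of risks to data subjects",
    "GDPR-35(7)(d)": "Measures addressing the identified risks",
    "GDPR-22(3)":    "Right to human intervention in automated decisions",
    "GDPR-13/14":    "Transparency information provided to data subjects",
}

# Evidence recorded per system; a missing entry is an open gap, not "compliant".
evidence = {
    ("HR screening tool", "GDPR-35(7)(a)"): "dpia/hr-screening/processing-description-v3.md",
    ("HR screening tool", "GDPR-35(7)(c)"): "dpia/hr-screening/risk-register-v3.xlsx",
}

def coverage_report(system: str) -> None:
    """Print covered obligations and open gaps for one system."""
    for oid, text in obligations.items():
        status = "covered" if (system, oid) in evidence else "GAP"
        print(f"{system} | {oid} | {text} | {status}")

coverage_report("HR screening tool")
```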
The EDPB’s 2026 Coordinated Enforcement Framework targets transparency obligations under Articles 12-14 (SecurePrivacy, 2026). All EU national DPAs are coordinating around how organizations communicate personal data handling. Those transparency requirements are direct outputs of a properly conducted DPIA. If your DPIA didn’t produce clear, specific transparency documentation, the coordinated enforcement wave will find that gap.
What a Defensible AI DPIA Process Looks Like
The CNIL — France’s data protection authority — publishes some of the most granular AI DPIA guidance available. Its self-assessment guide for AI systems provides a structured checklist that goes beyond Article 35’s minimum requirements.
Here’s a practical framework that synthesizes the CNIL’s guidance with the EDPB’s Opinion 28/2024:
Before development starts: Identify processing purposes. Map data flows. Determine legal basis. Assess whether the AI system triggers mandatory DPIA criteria. If it does — and for most AI systems it will — begin the DPIA before writing the first line of code.
During development: Document model selection rationale. Record training data sources and any personal data included. Define accuracy thresholds and what happens when the model falls below them. Establish human review workflows. Specify data retention limits for inputs, outputs, and logs.
Before deployment: Complete the risk assessment. Document every identified risk, its likelihood, its severity, and the specific measure that addresses it. Conduct the transfer impact assessment for any cross-border data flows. Get sign-off from your DPO (or privacy lead if DPO appointment isn’t required).
After deployment: Monitor. The DPIA isn’t a one-time document. Article 35(11) requires ongoing review “at least when there is a change in the risk represented by processing operations.” Model updates, new data sources, expanded use cases, vendor changes — each one triggers a DPIA review.
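A minimal sketch of event-triggered review logic, assuming a defined set of change events; the event names are illustrative, and the point is that any one of them reopens the DPIA rather than waiting for a calendar date.

```python
# Hypothetical event-triggered review check in the spirit of Article 35(11):
# any of these change events should reopen the DPIA. Event names are illustrative.
REVIEW_TRIGGERS = {
    "model_version_change",
    "new_data_source",
    "expanded_use_case",
    "vendor_or_subprocessor_change",
    "transfer_mechanism_change",
    "significant_volume_shift",
}

def dpia_review_required(events: set[str]) -> bool:
    """Return True if any observed change event is a defined review trigger."""
    return bool(events & REVIEW_TRIGGERS)

# Example: a routine model upgrade alone is enough to trigger a review.
print(dpia_review_required({"model_version_change"}))  # True
```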
Frequently Asked Questions
Does every AI system require a DPIA?
Not technically — but practically, most do. Any AI system that profiles individuals, makes automated decisions with significant effects, or processes sensitive data at scale triggers Article 35. The EDPB’s December 2024 opinion expanded this to include virtually all LLM deployments processing personal data. If you’re unsure, the safer course is to conduct one.
Can we use the AI provider’s DPIA instead of our own?
No. As the data controller, the DPIA obligation falls on you — not your vendor. The provider’s security documentation and DPA inform your assessment, but they don’t replace it. You need to assess risks specific to your processing context, your data subjects, and your deployment environment.
What happens if we deploy without a DPIA?
Article 83(4) allows fines up to EUR 10 million or 2% of annual global turnover for failing to conduct a required DPIA. Beyond fines, a missing DPIA means you can’t demonstrate compliance during a regulatory inquiry — which typically escalates the investigation’s scope and severity.
How often should we update the DPIA?
Article 35(11) requires review “at least when there is a change in the risk.” For AI systems, that means: model version changes, new data sources, expanded use cases, changes in data transfer mechanisms, and significant shifts in processing volume. In practice, quarterly reviews with event-triggered updates are the emerging standard.
Can the DPIA and the EU AI Act FRIA be the same document?
Yes — Article 27(4) of the AI Act explicitly allows consolidation. But the FRIA adds fundamental rights dimensions (discrimination, freedom of expression, access to services) that a standard GDPR DPIA doesn’t cover. If you consolidate, make sure those additional dimensions are addressed, not just appended.
The Cost of Getting This Wrong
EUR 1.2 billion in GDPR fines were issued in 2025 alone (DLA Piper, 2026). That figure broadly matched 2024, signaling sustained enforcement intensity rather than a one-year spike. The regulators aren’t backing down. They’re coordinating.
The organizations that treat DPIAs as a genuine risk assessment tool — conducted before processing, mapped to specific obligations, reviewed on a defined cadence — don’t just avoid fines. They deploy AI faster. Because when the DPO asks “where’s the DPIA?” the answer isn’t “we didn’t think we needed one.”
It’s “here — with the reasoning chain, the risk assessment, the transfer analysis, and the mitigation evidence already documented.”
That’s the difference between AI deployment that stalls at the compliance gate and AI deployment that clears it.
Sources: EDPB Opinion 28/2024. DLA Piper GDPR Fines and Data Breach Survey, January 2026. CMS Law GDPR Enforcement Tracker Report 2024/2025. Hamburg HmbBfDI enforcement action, September 2025 (via Clyde & Co). SecurityWall GDPR Enforcement Trends 2026. A&O Shearman AI Act FRIA analysis. CNIL AI DPIA self-assessment guide. Netguru AI Adoption Statistics 2026. SecurePrivacy GDPR Compliance 2026.
Map obligations to your AI systems
ReguLume covers 2,964 obligations across 15 regulations. Score your compliance posture in hours, not months.
Get Started