What Does a Completed EU AI Act FRIA Actually Look Like?
The EU AI Office was supposed to publish a FRIA template questionnaire. Article 27(5) says so explicitly. As of March 2026, it hasn’t arrived.
The August 2, 2026 deadline for high-risk AI system compliance hasn’t moved either.
So practitioners are improvising. Adapting DPIA frameworks. Guessing at scope. Hoping the template lands before the first audit does.
Every guide on the internet explains what a Fundamental Rights Impact Assessment is. None of them shows you a completed one. This post does — a worked example for an AI-powered resume screening system, section by section, with the Article 27 requirements mapped to actual content.
Why this example: When we decomposed Article 27 into its individual obligations, we found that most “FRIA guides” cover maybe 40% of what the regulation actually requires. The gap between “explain the concept” and “fill in every section” is where consultants lose 20+ hours per engagement.
[INTERNAL-LINK: EU AI Act Article 9 risk management obligations → article-level obligation breakdown]
Key Takeaways
- The EU AI Office has not published its mandated FRIA template (Article 27(5)) as of March 2026 — practitioners must build their own interim frameworks.
- Only 26.2% of companies have started concrete AI Act compliance activities (Deloitte/Civey, 2024).
- A FRIA covers all EU Charter fundamental rights — not just privacy. That’s the critical difference from a GDPR DPIA.
- AI resume screeners prefer white-associated names 85.1% of the time across 554 tested resumes (Brookings, 2025).
- Article 27(4) lets you reuse existing DPIA work, reducing FRIA effort by an estimated 30-40% (EU AI Act Navigator, 2025).
What Is a FRIA and Who Needs One?
Only 35.7% of managers describe themselves as adequately prepared for AI Act compliance (Deloitte/Civey, 2024, n=500). Part of the problem is that most organizations don’t know whether they need a FRIA at all — or how it differs from the DPIA they’ve been doing for eight years.
Article 27 requires a FRIA before first deployment of any high-risk AI system by:
- Public bodies deploying high-risk AI
- Private entities providing public services — education, healthcare, housing, social services, administration of justice
- Any deployer using AI for creditworthiness assessment (Annex III 5(b)) or life/health insurance risk pricing (Annex III 5(c))
Recruitment AI falls under Annex III Point 4: systems used for “placing targeted job advertisements, analysing and filtering job applications, and evaluating candidates.” High-risk. FRIA required.
The scope difference from a DPIA matters more than most practitioners realize.
A DPIA asks whether processing personal data creates privacy risks. A FRIA asks whether an AI system threatens any fundamental right in the EU Charter — dignity, non-discrimination, fair working conditions, access to justice, freedom of expression. And unlike a DPIA, a FRIA applies even when the AI system processes no personal data at all.
Article 27(4) offers one relief valve: if you’ve already completed a DPIA under GDPR Article 35 for the same system, the FRIA “shall complement” it — not duplicate it. Organizations can reduce FRIA effort by an estimated 30-40% by reusing existing DPIA work (EU AI Act Navigator, 2025). That’s the bridge. But the other 60-70% is net-new territory.
[INTERNAL-LINK: GDPR and AI Act obligation overlaps → cross-regulation mapping post]
The Scenario: RecruitAI Resume Screening
For this walkthrough, we’re building a FRIA for a fictional but realistic deployer:
- Company: TalentBridge GmbH, a mid-size German staffing agency (180 employees)
- System: “RecruitAI” — an LLM-based resume ranking tool that scores incoming applications against job descriptions
- Scale: ~500 applications/week across 12 client companies
- Why high-risk: Annex III Point 4(a) — AI used for “analysing and filtering job applications” and “evaluating candidates”
- Why FRIA required: TalentBridge provides recruitment services to two municipal employers (public sector clients), triggering Article 27(1)(a)
What follows is the FRIA content — section by section — as it would appear in TalentBridge’s compliance file.
FRIA Section 1: System Description and Intended Purpose
This section answers: What is the AI system, what does it do, and what decisions does it inform?
- System name: RecruitAI v2.3
- Provider: RecruitAI Solutions B.V. (Amsterdam)
- Deployer: TalentBridge GmbH (Berlin)
- CE marking status: Provider self-assessed per Article 43(2)
- Deployment date: Planned Q3 2026
Technical description: Cloud-hosted LLM (fine-tuned Mistral-7B) processes resume text against structured job requirements. Outputs a 0-100 relevance score per candidate. Top-scoring 20% are forwarded to human recruiters for interview selection.
Intended purpose: Reduce initial screening time from 4 hours to 30 minutes per requisition. System ranks candidates — it does not make autonomous hiring decisions.
Foreseeable misuse: Using relevance scores as the sole basis for rejection without human review. Applying the system to performance evaluation (not its intended purpose). Processing candidate video or audio data (prohibited under Article 5(1)(f) — emotion recognition in workplace).
Our finding: When we decomposed Article 27 into discrete obligations, we identified 11 specific content requirements for the system description section alone. Most published FRIA guides cover 4-5 of them.
FRIA Section 2: Affected Populations and Fundamental Rights at Risk
This section answers: Who is affected by this system, and which of their fundamental rights could be impacted?
AI resume screeners show measurable demographic bias. A 2025 Brookings Institution study tested three LLM embedding models across 554 resumes and nine occupations — white-associated names were preferred 85.1% of the time. Black-associated names were preferred just 8.6% of the time. In direct comparisons, Black male candidates were disadvantaged in 100% of matchups against white males (Brookings/Wilson & Caliskan, 2025).
That’s not hypothetical risk. That’s measured output from the same class of model TalentBridge plans to deploy.
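A deployer can run the same kind of name-swap audit on its own pipeline before go-live: score pairs of otherwise-identical resumes that differ only in demographic signals, and count how often each group wins. A minimal Python sketch; the function name and toy data are illustrative, not the Brookings methodology.

```python
# Hypothetical pre-deployment audit sketch: count how often the screener
# ranks each demographic group's resume higher across pairwise trials of
# otherwise-identical resumes. Toy data only; not real audit results.

from collections import Counter

def preference_rates(trial_winners):
    """trial_winners: one group label per pairwise trial, naming the
    group whose resume the model ranked higher (or 'tie')."""
    counts = Counter(trial_winners)
    total = len(trial_winners)
    return {group: n / total for group, n in counts.items()}

# Toy data: 10 head-to-head trials.
winners = ["white"] * 8 + ["black"] * 1 + ["tie"] * 1
rates = preference_rates(winners)
assert rates["white"] == 0.8  # a disparity this size should block deployment
```

The point is not the arithmetic but the artifact: a number per group, recomputed on every model update, filed as evidence in Section 3.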
Here’s how TalentBridge’s FRIA maps affected populations to specific Charter rights:
- Directly affected: All job applicants processed by RecruitAI (~26,000/year)
- Disproportionately affected: Applicants with non-Western European names, women applying to male-dominated roles, candidates over 50, candidates with employment gaps (disability, caregiving)
Fundamental rights at risk:
- Non-discrimination (Charter Art. 21): Proxy discrimination through name, address, or education institution encoding demographic signals
- Fair working conditions (Charter Art. 31): Automated screening may systematically exclude qualified candidates from employment opportunities
- Human dignity (Charter Art. 1): Reducing a candidate’s professional experience to a numerical score without context
- Data protection (Charter Art. 8): Processing of CVs containing health information, ethnic background indicators, age
- Freedom to choose an occupation (Charter Art. 15): Algorithmic gatekeeping that restricts access to job opportunities
The Workday class action puts this in enforcement context: a U.S. court certified a class potentially covering “millions of applicants” over 40 whose applications were processed — and allegedly rejected — by Workday’s AI screening tools. The claim: 1.1 billion applications rejected. The legal theory: disparate impact discrimination (Fisher Phillips, 2025).
EU enforcement under Article 99 carries fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher, for high-risk system non-compliance.
[INTERNAL-LINK: EU AI Act penalty structure → article on compliance requirements]
FRIA Section 3: Risk Assessment
This section answers: How severe are the identified risks, and how likely are they to materialize?
The FRIA risk assessment isn’t a generic risk matrix. It maps each identified right to specific, concrete threats — with severity and likelihood grounded in evidence, not intuition.
| Fundamental Right | Risk | Severity | Likelihood | Evidence |
| --- | --- | --- | --- | --- |
| Non-discrimination (Art. 21) | Proxy discrimination via name encoding | High | High | Brookings 2025: 85.1% white-name preference rate in LLM screening |
| Non-discrimination (Art. 21) | Gender bias in scoring | High | Medium | Brookings 2025: men preferred 51.9% vs women 11.1% |
| Fair working conditions (Art. 31) | Qualified candidates systematically excluded | High | Medium | System filters 80% of applicants — false negatives carry career consequences |
| Human dignity (Art. 1) | Reduction of professional identity to a score | Medium | High | Inherent to any scoring system applied to human candidates |
| Data protection (Art. 8) | Inference of protected characteristics from CV data | Medium | High | Name, address, education institution, dates = demographic proxies |
| Freedom of occupation (Art. 15) | Algorithmic gatekeeping at scale | Medium | Medium | 26,000 applicants/year processed by a single deployer |
What most FRIA guides miss: The risk assessment isn’t about the AI model in isolation. It’s about the AI model at the deployment scale of this specific deployer. A model with a 5% false negative rate is a different risk at 100 applications/year vs 26,000. The FRIA must reflect the deployer’s volume, not the provider’s benchmarks.
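One way to keep the register above actionable rather than decorative is to encode it as data and rank risks by a severity-times-likelihood score, so mitigation effort tracks priority. A hypothetical sketch; the numeric scale is an assumption, not anything Article 27 prescribes.

```python
# Illustrative risk-register encoding. Mapping severity/likelihood to
# 1-3 and multiplying is a common convention, assumed here, not mandated.

SCALE = {"Low": 1, "Medium": 2, "High": 3}

risks = [
    ("Proxy discrimination via name encoding", "High", "High"),
    ("Gender bias in scoring", "High", "Medium"),
    ("Qualified candidates systematically excluded", "High", "Medium"),
    ("Reduction of professional identity to a score", "Medium", "High"),
    ("Inference of protected characteristics from CV data", "Medium", "High"),
    ("Algorithmic gatekeeping at scale", "Medium", "Medium"),
]

def priority(risk):
    _, severity, likelihood = risk
    return SCALE[severity] * SCALE[likelihood]

# Highest-priority risks first; ties keep their documented order.
for risk in sorted(risks, key=priority, reverse=True):
    name, sev, lik = risk
    print(f"{priority(risk)}  {sev}/{lik}  {name}")
```

Re-run the ranking whenever a mitigation lands or a new risk is identified; the ordering, not the raw scores, is what drives the Section 5 work plan.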
FRIA Section 4: Human Oversight Measures
This section answers: What mechanisms ensure a human can intervene, override, or stop the system?
Article 14 requires that high-risk AI systems be “designed and developed in such a way that they can be effectively overseen by natural persons.” For TalentBridge, that means:
Oversight architecture:
- Threshold review: All candidates scoring between 40-60 (the “gray zone”) receive mandatory human review — no automatic rejection
- Override capability: Recruiters can override any score with documented justification
- Escalation path: Candidates who self-report potential bias can trigger a manual re-evaluation
- Batch audit: Monthly random sample of 50 rejected candidates reviewed for demographic patterns
- Kill switch: System can be disabled per-client or globally within 4 hours of a reported issue
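The threshold-review rule can be sketched as a routing function that makes the human-in-the-loop step structurally unavoidable rather than a policy someone remembers. The 40-60 gray zone comes from the scenario; the function name and return labels are hypothetical.

```python
# Minimal sketch of threshold-review routing. Labels are illustrative;
# any rejection remains open to documented recruiter override.

def route(score: float) -> str:
    """Map a 0-100 relevance score to the next processing step."""
    if 40 <= score <= 60:
        return "human_review"          # gray zone: mandatory human review
    if score > 60:
        return "forward"               # forwarded to recruiters for interview selection
    return "reject_with_override"      # rejection logged, override path stays open

assert route(50) == "human_review"
assert route(85) == "forward"
```

Encoding the rule this way also makes it auditable: the monthly batch audit can replay logged scores through the same function and verify no gray-zone candidate was auto-rejected.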
Reviewer qualifications: All staff conducting reviews must complete bias-awareness training (Article 4 AI literacy requirement, enforceable since February 2, 2025). Training records maintained in compliance file.
The key principle: oversight isn’t a dashboard someone checks on Fridays. It’s an architecture that forces human judgment at the decision points where the AI is most likely to be wrong.
[INTERNAL-LINK: Article 14 human oversight requirements → detailed obligation breakdown]
FRIA Section 5: Mitigation Strategies
43% of organizations worldwide used AI for recruitment in 2025 — up from 26% in 2024 (HeroHunt.ai, 2025). Adoption is accelerating. Mitigation frameworks aren’t keeping up.
TalentBridge’s FRIA documents specific countermeasures for each identified risk:
Technical mitigations:
- Pre-deployment bias audit using demographic-balanced test set (minimum 200 resumes per protected category)
- Quarterly re-testing against updated Brookings methodology
- Name and address anonymization in scoring pipeline — model sees skills, experience, qualifications only
- Confidence thresholding: scores below 0.7 model confidence flagged for human review regardless of relevance score
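The anonymization and confidence-thresholding mitigations can be sketched in a few lines. The Candidate fields, the 0.7 cutoff carried over from the list above, and all names are illustrative assumptions, not a real RecruitAI API.

```python
# Sketch of two technical mitigations: strip demographic proxies before
# scoring, and flag low-confidence outputs for human review. Field and
# function names are hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    address: str
    skills: str
    experience: str

def anonymize(c: Candidate) -> dict:
    """Model input keeps skills and experience only; name and address
    never reach the scorer."""
    return {"skills": c.skills, "experience": c.experience}

def needs_review(confidence: float, threshold: float = 0.7) -> bool:
    """Low model confidence forces human review regardless of the
    relevance score the model produced."""
    return confidence < threshold

cand = Candidate("A. Example", "Berlin", "Python, SQL", "5 years recruitment analytics")
features = anonymize(cand)
assert "name" not in features and "address" not in features
assert needs_review(0.55)
```

The design choice worth noting: anonymization happens at the pipeline boundary, so the FRIA can point to one function as the enforcement point instead of trusting every downstream component.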
Organizational mitigations:
- Dedicated AI compliance officer (reports to managing director, not to IT)
- Incident reporting procedure: any candidate complaint investigated within 5 business days
- Annual external audit by accredited conformity assessment body (DEKRA became first accredited body on March 18, 2026)
DPIA bridge (Article 27(4)): TalentBridge’s existing GDPR DPIA for candidate data processing covers data minimization, retention periods, and subject access rights. These carry forward to the FRIA — no duplication needed. Estimated effort reduction: 30-40%.
FRIA Section 6: Notification and Documentation
This section answers: Who do you tell, and what do you keep on file?
Market surveillance authority notification: TalentBridge submits FRIA results to the Bundesnetzagentur (Germany’s designated AI Act authority, operating via the “AI Service Desk”). Notification occurs before first deployment.
EU database registration: Because TalentBridge deploys on behalf of two municipal employers (public bodies), its use of the system must be registered in the EU database for high-risk AI systems per Article 49(4), which covers public-body deployers and those acting on their behalf.
Documentation retention: FRIA maintained for the lifetime of the AI system plus 10 years. Updated when the system’s intended purpose, deployment context, or affected populations change materially.
Availability: Full FRIA available to the market surveillance authority on request. Summary available to affected individuals who exercise their right to explanation under Article 86.
Short section. Don’t overthink it. The substance is in sections 1-5. This section is procedural — but skip it and the FRIA is incomplete.
Common Mistakes to Avoid
Treating the FRIA as a renamed DPIA. It isn’t. A DPIA covers two Charter rights: privacy (Art. 7) and data protection (Art. 8). A FRIA covers all of them. Copy-pasting your DPIA template leaves the remaining right categories (non-discrimination, dignity, fair working conditions, freedom of occupation) unaddressed, and the auditor will notice.
Completing the FRIA after deployment. Article 27 says “prior to deploying.” Not “during the first month of operation.” Not “when the auditor asks.” Before first use. Full stop.
Ignoring indirect affected populations. The candidates who never get screened are also affected. If RecruitAI processes only applications submitted through one platform, candidates who apply via email or walk-in are excluded from equal consideration. The FRIA must account for exclusion effects — not just the people in the pipeline.
Missing the DPIA bridge. Article 27(4) explicitly says the FRIA “shall complement” an existing DPIA. If you’ve already done one, reuse it. That’s 30-40% of the effort you don’t need to duplicate. Many consultants miss this — building from scratch when they have a head start sitting in the GDPR compliance file.
Waiting for the official template. The AI Office template was mandated by Article 27(5). It hasn’t arrived. August 2026 will arrive first. Build your interim framework from the ECNL guide, the CEDPO questionnaire, or this walkthrough — then adapt when the official version lands.
Frequently Asked Questions
Can I merge my FRIA with my existing GDPR DPIA?
Article 27(4) says the FRIA “shall complement” an existing DPIA — not replace it. In practice, you can produce a single document with a DPIA section and a FRIA extension covering the additional Charter rights. The DPIA bridge reduces effort by an estimated 30-40% (EU AI Act Navigator, 2025). But the FRIA sections covering non-discrimination, dignity, and access to justice must be new content.
What if the AI Office publishes its template after I’ve completed my FRIA?
Update your FRIA to match the new template’s structure if materially different. Your existing assessment work carries forward — the substance doesn’t change because the format does. Organizations that wait for the template risk missing the August 2, 2026 deadline entirely. Only 26.2% have started concrete compliance activities (Deloitte/Civey, 2024).
[INTERNAL-LINK: planning for deadline uncertainty → Digital Omnibus decision framework post]
Do I need a separate FRIA for every client engagement?
Yes — each deployer must complete their own FRIA. If you’re a consultant managing 10 clients who each deploy the same AI system, that’s 10 FRIAs. The system description may be identical, but the deployment context, affected populations, and oversight measures differ per deployer. This is where tooling pays for itself.
What if my AI system processes no personal data?
You still need a FRIA if the system is high-risk and you meet the Article 27(1) deployer criteria. FRIAs cover all Charter rights — not just data protection. An AI system that scores candidates based on anonymized work samples still implicates non-discrimination (Art. 21), human dignity (Art. 1), and freedom of occupation (Art. 15).
How often must the FRIA be updated?
Article 27(1) requires the FRIA to be performed “prior to deploying” the system. There’s no mandated review cycle in the text — but any material change to the system’s purpose, affected population, or deployment context should trigger an update. In practice: review when the provider releases a model update, when you onboard a new client category, or when enforcement guidance changes the interpretation of obligations.
What Comes Next
The official template will arrive eventually. The deadline arrives August 2, 2026 regardless.
This walkthrough covers the substance. The scenario is fictional. The Article 27 requirements are not. Every section maps to a specific obligation in the regulation — and every obligation carries the enforcement weight of EUR 15 million or 3% of global turnover.
If you’re a consultant running FRIA engagements across multiple clients, the math on doing this manually is 20-40 hours per assessment. At three clients, that’s a quarter of your month.
ReguLume decomposes Article 27 into its individual obligations, maps them against your client’s AI systems, and generates the documentation framework. The judgment is yours. The cross-referencing shouldn’t have to be.
[INTERNAL-LINK: how ReguLume maps obligations → product walkthrough or /for-eu-consultants landing page]
Sources cited in this article: EU AI Act Article 27, EU AI Act Annex III, EU AI Act Article 99, Brookings Institution / Wilson & Caliskan (2025), Deloitte Legal Germany / Civey (2024), Fisher Phillips — Workday class action (2025), HeroHunt.ai — AI Recruitment Adoption (2025), EU AI Act Navigator — FRIA vs DPIA (2025), CEDPO FRIA Questionnaire, ECNL FRIA Guide.
Map obligations to your AI systems
ReguLume covers 2,964 obligations across 15 regulations. Score your compliance posture in hours, not months.
Get Started