EU-AI-Act
Regulation (EU) 2024/1689 — Artificial Intelligence Act
- I. General Provisions
- Art. 1. Subject matter
- Art. 2. Scope
- Art. 3. Definitions
- Art. 4. AI literacy
- II. Prohibited AI Practices
- Art. 5. Prohibited artificial intelligence practices
- III. High-Risk AI Systems
- Ch. 1 — Classification of AI Systems as High-Risk
- Art. 6. Classification rules for high-risk AI systems (7)
- Art. 7. Amendments to Annex III (12)
- Ch. 2 — Requirements for High-Risk AI Systems
- Art. 8. Compliance with the requirements (5)
- Art. 9. Risk management system (15)
- Art. 10. Data and data governance (20)
- Art. 11. Technical documentation (7)
- Art. 12. Record-keeping (8)
- Art. 13. Transparency and provision of information to deployers (14)
- Art. 14. Human oversight (11)
- Art. 15. Accuracy, robustness and cybersecurity (9)
- Ch. 3 — Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties
- Art. 16. Obligations of providers of high-risk AI systems (12)
- Art. 17. Quality management system (16)
- Art. 18. Documentation keeping (6)
- Art. 19. Automatically generated logs (2)
- Art. 20. Corrective actions and duty of information (5)
- Art. 21. Cooperation with competent authorities (3)
- Art. 22. Duty of providers of high-risk AI systems to inform (2)
- Art. 23. Obligations of importers (12)
- Art. 24. Obligations of distributors (10)
- Art. 25. Responsibilities along the AI value chain (9)
- Ch. 4 — Obligations of Deployers of High-Risk AI Systems
- Art. 26. Obligations of deployers of high-risk AI systems (17)
- Art. 27. Fundamental rights impact assessment for high-risk AI systems (10)
- Ch. 5 — Notifying Authorities and Notified Bodies
- Art. 28. Notifying authorities (8)
- IV. Transparency Obligations for Providers and Deployers of Certain AI Systems
- Art. 50. Transparency obligations for providers and deployers of certain AI systems (9)
- V. General-Purpose AI Models
- Ch. 1 — Classification Rules
- Art. 51. Classification of general-purpose AI models as general-purpose AI models with systemic risk (4)
- Ch. 2 — Obligations for Providers of General-Purpose AI Models
- Art. 53. Obligations for providers of general-purpose AI models (6)
- Art. 54. Authorised representatives of providers of general-purpose AI models (11)
- Art. 55. Obligations for providers of general-purpose AI models with systemic risk (6)
- Art. 56. Codes of practice (8)
- VIII. Post-Market Monitoring, Information Sharing and Market Surveillance
- Ch. 1 — Post-Market Monitoring
- Art. 72. Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (7)
- Ch. 2 — Sharing of Information on Serious Incidents
- Art. 73. Reporting of serious incidents (12)
- X. Codes of Conduct and Guidelines
- Art. 95. Codes of conduct for voluntary application of specific requirements (6)
- XII. Penalties
- Art. 99. Penalties (8)
- Art. 100. Administrative fines on Union institutions, bodies, offices and agencies (7)
- Art. 101. Penalties for providers of general-purpose AI models (4)
- Annex I. Union Harmonisation Legislation Listed in Article 6(1)
- Annex III. High-Risk AI Systems Referred to in Article 6(2)
- Annex IV. Technical Documentation Referred to in Article 11(1)
Chapter 4 — Obligations of Deployers of High-Risk AI Systems
Article 26. Obligations of deployers of high-risk AI systems
16 obligations
EU-AIA-26-02
Human Oversight
Assign qualified human oversight
Deployers must assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.
EU-AIA-26-03
Data Governance
Ensure input data quality when controlling input data
To the extent the deployer exercises control over the input data, that deployer must ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
EU-AIA-26-04
Monitoring
Monitor AI system operation
Deployers must monitor the operation of the high-risk AI system on the basis of the instructions for use.
EU-AIA-26-05
Reporting
Inform providers of relevant issues
Where relevant, deployers must inform providers in accordance with Article 72 about issues arising from monitoring the AI system's operation on the basis of the instructions for use.
EU-AIA-26-06
Reporting
Report risks and suspend system use
Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions for use may result in that system presenting a risk within the meaning of Article 79(1), they must, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and suspend the use of that system.
EU-AIA-26-07
Reporting
Report serious incidents immediately
Where deployers have identified a serious incident, they must immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident.
EU-AIA-26-08
Documentation
Keep system logs for minimum 6 months
Deployers must keep the logs automatically generated by the high-risk AI system, to the extent such logs are under their control, for a period appropriate to the intended purpose of the system, of at least six months, unless provided otherwise in applicable Union or national law.
EU-AIA-26-09
Transparency
Inform workers about AI system use
Before putting into service or using a high-risk AI system at the workplace, deployers who are employers must inform workers' representatives and the affected workers that they will be subject to the use of the high-risk AI system.
EU-AIA-26-10
Registration
Public authority registration compliance
Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies, must comply with the registration obligations referred to in Article 49.
EU-AIA-26-11
Requirement
Verify system registration before use
When public authority deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they must not use that system and must inform the provider or the distributor.
EU-AIA-26-12
Data Governance
Use provider information for DPIA compliance
Where applicable, deployers must use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.
EU-AIA-26-13
Requirement
Request authorisation for post-remote biometric identification
In criminal investigations, deployers of high-risk AI systems for post-remote biometric identification must request authorisation, ex ante or without undue delay and no later than 48 hours, from a judicial authority or an administrative authority whose decision is binding and subject to judicial review.
EU-AIA-26-14
Requirement
Limit biometric system use to necessary scope
Each use of post-remote biometric identification systems must be limited to what is strictly necessary for the investigation of a specific criminal offence.
EU-AIA-26-15
Requirement
Stop unauthorised biometric system use and delete data
If authorisation for post-remote biometric identification is rejected, deployers must stop the use of the system with immediate effect and delete the personal data linked to that use.
EU-AIA-26-16
Transparency
Inform natural persons of AI system use
Deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons must inform the natural persons that they are subject to the use of the high-risk AI system.
EU-AIA-26-17
Requirement
Cooperate with national competent authorities
Deployers must cooperate with the relevant national competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation.
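The Article 26 entries above form a small machine-readable register. As a minimal sketch of how one of them could be operationalised, the log-retention floor in EU-AIA-26-08 can be expressed as a simple check. The `Obligation` type, field names, and the 183-day approximation of six months are illustrative assumptions, not part of the Regulation:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class Obligation:
    """One register entry; mirrors the ID/category/title rows above."""
    obligation_id: str  # e.g. "EU-AIA-26-08"
    category: str       # e.g. "Documentation"
    title: str

# Art. 26(6): logs must be kept "for a period appropriate to the intended
# purpose ... of at least six months". Six months is approximated here as
# 183 days; applicable Union or national law may require a longer period.
MIN_LOG_RETENTION = timedelta(days=183)

def retention_meets_minimum(configured_retention: timedelta) -> bool:
    """True if a deployer's configured log-retention period satisfies
    the six-month floor of EU-AIA-26-08."""
    return configured_retention >= MIN_LOG_RETENTION

log_obligation = Obligation("EU-AIA-26-08", "Documentation",
                            "Keep system logs for minimum 6 months")
retention_meets_minimum(timedelta(days=365))  # a one-year policy passes
retention_meets_minimum(timedelta(days=90))   # a 90-day policy fails
```

Note the check only covers the retention *period*; whether the logs are "under the deployer's control" at all is a separate factual question the code cannot answer.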
Article 27. Fundamental rights impact assessment for high-risk AI systems
9 obligations
EU-AIA-27-01
Risk Management
Perform fundamental rights impact assessment before deployment
Prior to deploying a high-risk AI system, specified deployers must perform an assessment of the impact on fundamental rights that the use of such a system may produce.
EU-AIA-27-02
Documentation
Describe deployer processes using AI system
As part of the fundamental rights impact assessment, deployers must provide a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose.
EU-AIA-27-03
Documentation
Describe time period and frequency of AI system use
As part of the fundamental rights impact assessment, deployers must provide a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used.
EU-AIA-27-04
Risk Management
Identify categories of persons affected by AI system use
As part of the fundamental rights impact assessment, deployers must identify the categories of natural persons and groups of persons likely to be affected by the system's use in the specific context.
EU-AIA-27-05
Risk Management
Assess specific risks of harm to affected persons
As part of the fundamental rights impact assessment, deployers must assess the specific risks of harm likely to impact the identified categories of natural persons or groups of persons.
EU-AIA-27-06
Human Oversight
Describe human oversight implementation measures
As part of the fundamental rights impact assessment, deployers must provide a description of the implementation of human oversight measures, according to the instructions for use.
EU-AIA-27-07
Risk Management
Define measures for when risks materialise
As part of the fundamental rights impact assessment, deployers must define the measures to be taken when identified risks materialise, including the arrangements for internal governance and complaint mechanisms.
EU-AIA-27-08
Monitoring
Update fundamental rights impact assessment when elements change
If, during the use of the AI system, the deployer considers that any assessment elements have changed or are no longer up to date, the deployer must take the necessary steps to update the information.
EU-AIA-27-09
Reporting
Notify market surveillance authority of assessment results
Once the fundamental rights impact assessment has been performed, the deployer must notify the market surveillance authority of its results, submitting the filled-out template referred to in Article 27(5).
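Points (a) to (f) of Article 27(1), itemised above as EU-AIA-27-02 through EU-AIA-27-07, amount to a completeness checklist for the assessment. A minimal sketch of such a checklist follows; the element key names and the `missing_elements` helper are illustrative assumptions, not the Regulation's wording:

```python
# Required elements of a fundamental rights impact assessment (FRIA),
# keyed by the register IDs above; keys paraphrase Art. 27(1)(a)-(f).
FRIA_ELEMENTS = {
    "EU-AIA-27-02": "process_description",
    "EU-AIA-27-03": "period_and_frequency",
    "EU-AIA-27-04": "affected_categories",
    "EU-AIA-27-05": "specific_risks_of_harm",
    "EU-AIA-27-06": "human_oversight_measures",
    "EU-AIA-27-07": "risk_materialisation_measures",
}

def missing_elements(assessment: dict) -> set:
    """Register IDs whose element is absent or empty in a draft FRIA."""
    return {oid for oid, key in FRIA_ELEMENTS.items()
            if not assessment.get(key)}

draft = {"process_description": "Loan-scoring workflow",
         "affected_categories": "Credit applicants"}
missing_elements(draft)
# leaves four elements still to document before notifying the
# market surveillance authority (EU-AIA-27-09)
```

A complete draft, and the later update duty in EU-AIA-27-08, both reduce to re-running the same check whenever any element changes.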