EU-AI-Act
Regulation (EU) 2024/1689 — Artificial Intelligence Act
- I. General Provisions
- Art. 1. Subject matter
- Art. 2. Scope
- Art. 3. Definitions
- Art. 4. AI literacy
- II. Prohibited AI Practices
- Art. 5. Prohibited artificial intelligence practices
- III. High-Risk AI Systems
- Ch. 1 — Classification of AI Systems as High-Risk
- Art. 6. Classification rules for high-risk AI systems (7)
- Art. 7. Amendments to Annex III (12)
- Ch. 2 — Requirements for High-Risk AI Systems
- Art. 8. Compliance with the requirements (5)
- Art. 9. Risk management system (15)
- Art. 10. Data and data governance (20)
- Art. 11. Technical documentation (7)
- Art. 12. Record-keeping (8)
- Art. 13. Transparency and provision of information to deployers (14)
- Art. 14. Human oversight (11)
- Art. 15. Accuracy, robustness and cybersecurity (9)
- Ch. 3 — Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties
- Art. 16. Obligations of providers of high-risk AI systems (12)
- Art. 17. Quality management system (16)
- Art. 18. Documentation keeping (6)
- Art. 19. Automatically generated logs (2)
- Art. 20. Corrective actions and duty of information (5)
- Art. 21. Cooperation with competent authorities (3)
- Art. 22. Duty of providers of high-risk AI systems to inform (2)
- Art. 23. Obligations of importers (12)
- Art. 24. Obligations of distributors (10)
- Art. 25. Responsibilities along the AI value chain (9)
- Ch. 4 — Obligations of Deployers of High-Risk AI Systems
- Art. 26. Obligations of deployers of high-risk AI systems (17)
- Art. 27. Fundamental rights impact assessment for high-risk AI systems (10)
- Ch. 5 — Notifying Authorities and Notified Bodies
- Art. 28. Notifying authorities (8)
- IV. Transparency Obligations for Providers and Deployers of Certain AI Systems
- Art. 50. Transparency obligations for providers and deployers of certain AI systems (9)
- V. General-Purpose AI Models
- Ch. 1 — Classification Rules
- Art. 51. Classification of general-purpose AI models as general-purpose AI models with systemic risk (4)
- Ch. 2 — Obligations for Providers of General-Purpose AI Models
- Art. 53. Obligations for providers of general-purpose AI models (6)
- Art. 54. Authorised representatives of providers of general-purpose AI models (11)
- Art. 55. Obligations for providers of general-purpose AI models with systemic risk (6)
- Art. 56. Codes of practice (8)
- VIII. Post-Market Monitoring, Information Sharing and Market Surveillance
- Ch. 1 — Post-Market Monitoring
- Art. 72. Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (7)
- Ch. 2 — Sharing of Information on Serious Incidents
- Art. 73. Reporting of serious incidents (12)
- X. Codes of Conduct and Guidelines
- Art. 95. Codes of conduct for voluntary application of specific requirements (6)
- XII. Penalties
- Art. 99. Penalties (8)
- Art. 100. Administrative fines on Union institutions, bodies, offices and agencies (7)
- Art. 101. Penalties for providers of general-purpose AI models (4)
- Annex I. Union Harmonisation Legislation Listed in Article 6(1)
- Annex III. High-Risk AI Systems Referred to in Article 6(2)
- Annex IV. Technical Documentation Referred to in Article 11(1)
Human Oversight Obligations
Article 14. Human oversight
11 obligations
EU-AIA-14-01
Human Oversight
Design systems for effective human oversight
High-risk AI systems must be designed and developed with appropriate human-machine interface tools so that they can be effectively overseen by natural persons during the period in which they are in use.
EU-AIA-14-02
Human Oversight
Implement oversight measures to prevent or minimize risks
Human oversight must aim to prevent or minimize risks to health, safety or fundamental rights that may emerge during intended use or under conditions of reasonably foreseeable misuse.
EU-AIA-14-03
Human Oversight
Ensure oversight measures are commensurate to risks and context
Oversight measures must be proportionate to the risks, level of autonomy and context of use of the high-risk AI system.
EU-AIA-14-04
Human Oversight
Build oversight measures into the system when technically feasible
Providers must identify and build oversight measures into the high-risk AI system when technically feasible, before placing it on the market or putting it into service.
EU-AIA-14-05
Human Oversight
Identify oversight measures appropriate for deployer implementation
Providers must identify oversight measures that are appropriate to be implemented by the deployer before placing the system on the market or putting it into service.
EU-AIA-14-06
Human Oversight
Enable understanding of system capacities and limitations
The system must enable natural persons assigned to oversight to properly understand the relevant capacities and limitations of the high-risk AI system and to duly monitor its operation, including with a view to detecting and addressing anomalies, dysfunctions and unexpected performance.
EU-AIA-14-07
Human Oversight
Enable awareness of automation bias
The system must enable natural persons to remain aware of the possible tendency of automatically relying on or over-relying on the output produced by the high-risk AI system (automation bias).
EU-AIA-14-08
Human Oversight
Enable correct interpretation of system output
The system must enable natural persons to correctly interpret the high-risk AI system's output, taking into account, for example, the interpretation tools and methods available.
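The interpretation requirement above (EU-AIA-14-08) implies that a system should surface its output together with the context an overseer needs to read it correctly. A minimal sketch in Python, pairing a prediction with a confidence value and documented caveats; all class, field and function names here are invented for illustration and are not taken from the Act or any library:

```python
from dataclasses import dataclass

@dataclass
class OversightOutput:
    """Wraps a model output with interpretation aids for the human overseer.

    Illustrative sketch only; names and fields are assumptions,
    not a design mandated by the Act.
    """
    label: str                     # the system's output
    confidence: float              # model confidence in [0, 1]
    known_limitations: list[str]   # caveats to surface alongside the output

def present_to_overseer(out: OversightOutput) -> str:
    """Render the output with the context needed to interpret it correctly."""
    caveats = "; ".join(out.known_limitations) or "none documented"
    return (f"Prediction: {out.label} "
            f"(confidence {out.confidence:.0%}; caveats: {caveats})")
```

The point of the wrapper is that a bare label never reaches the overseer: confidence and known limitations travel with every output by construction.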
EU-AIA-14-09
Human Oversight
Enable decision not to use or disregard system output
The system must enable natural persons to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse its output.
EU-AIA-14-10
Human Oversight
Enable intervention and system interruption
The system must enable natural persons to intervene in the operation of the high-risk AI system or interrupt the system through a "stop" button or a similar procedure that allows the system to come to a halt in a safe state.
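The override and intervention obligations (EU-AIA-14-09 and EU-AIA-14-10) can be pictured as a small control surface that the human overseer owns: outputs can be overridden per decision, and a "stop" signal halts all further processing. A hedged Python sketch, assuming a threaded pipeline; class and method names are illustrative assumptions, not a prescribed architecture:

```python
import threading

class OversightController:
    """Minimal human-intervention hook: an overseer can override an
    individual output or halt the whole system in a safe state.

    Illustrative only; the Act requires the capability, not this design.
    """

    def __init__(self):
        self._stop = threading.Event()   # the "stop button"
        self._overrides = {}             # decision_id -> overseer-imposed outcome

    def request_stop(self):
        """Signal the pipeline to come to a halt in a safe state."""
        self._stop.set()

    def override(self, decision_id, outcome):
        """Record that the overseer disregarded or reversed the AI output."""
        self._overrides[decision_id] = outcome

    def resolve(self, decision_id, ai_output):
        """Return the effective outcome, honouring stop and overrides."""
        if self._stop.is_set():
            raise RuntimeError("system halted by human overseer")
        return self._overrides.get(decision_id, ai_output)
```

The design choice worth noting is that the override and the stop signal live outside the model's code path, so the human decision always wins regardless of what the model produced.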
EU-AIA-14-11
Human Oversight
Require dual human verification for biometric identification systems
For high-risk AI systems used for biometric identification (Annex III, point 1(a)), deployers must ensure that no action or decision is taken on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.
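The two-person rule in EU-AIA-14-11 amounts to gating any action on a biometric match behind confirmations from two distinct people. A minimal Python sketch of that gate, assuming verifiers are identified by unique IDs; the names are invented for illustration, and real deployments would also need to check the verifiers' competence, training and authority, which this sketch does not model:

```python
class DualVerification:
    """Two-person rule for biometric identification matches:
    no action may proceed until two distinct verifiers confirm.

    Illustrative sketch; the Act does not prescribe an implementation.
    """

    REQUIRED_VERIFIERS = 2

    def __init__(self, match_id):
        self.match_id = match_id
        self.confirmations = set()   # set of distinct verifier IDs

    def confirm(self, verifier_id):
        """Record a confirmation; repeat confirmations by the same person
        do not add to the count because the set deduplicates them."""
        self.confirmations.add(verifier_id)

    def may_act(self):
        """True only once two separate natural persons have confirmed."""
        return len(self.confirmations) >= self.REQUIRED_VERIFIERS
```

Using a set rather than a counter is the essential choice: it makes "verified by at least two natural persons" mean two *different* people by construction.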