EU-AI-Act
Regulation (EU) 2024/1689 — Artificial Intelligence Act
- I. General Provisions
- Art. 1. Subject matter
- Art. 2. Scope
- Art. 3. Definitions
- Art. 4. AI literacy
- II. Prohibited AI Practices
- Art. 5. Prohibited artificial intelligence practices
- III. High-Risk AI Systems
- Ch. 1 — Classification of AI Systems as High-Risk
- Art. 6. Classification rules for high-risk AI systems (7)
- Art. 7. Amendments to Annex III (12)
- Ch. 2 — Requirements for High-Risk AI Systems
- Art. 8. Compliance with the requirements (5)
- Art. 9. Risk management system (15)
- Art. 10. Data and data governance (20)
- Art. 11. Technical documentation (7)
- Art. 12. Record-keeping (8)
- Art. 13. Transparency and provision of information to deployers (14)
- Art. 14. Human oversight (11)
- Art. 15. Accuracy, robustness and cybersecurity (9)
- Ch. 3 — Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties
- Art. 16. Obligations of providers of high-risk AI systems (12)
- Art. 17. Quality management system (16)
- Art. 18. Documentation keeping (6)
- Art. 19. Automatically generated logs (2)
- Art. 20. Corrective actions and duty of information (5)
- Art. 21. Cooperation with competent authorities (3)
- Art. 22. Duty of providers of high-risk AI systems to inform (2)
- Art. 23. Obligations of importers (12)
- Art. 24. Obligations of distributors (10)
- Art. 25. Responsibilities along the AI value chain (9)
- Ch. 4 — Obligations of Deployers of High-Risk AI Systems
- Art. 26. Obligations of deployers of high-risk AI systems (17)
- Art. 27. Fundamental rights impact assessment for high-risk AI systems (10)
- Ch. 5 — Notifying Authorities and Notified Bodies
- Art. 28. Notifying authorities (8)
- IV. Transparency Obligations for Providers and Deployers of Certain AI Systems
- Art. 50. Transparency obligations for providers and deployers of certain AI systems (9)
- V. General-Purpose AI Models
- Ch. 1 — Classification Rules
- Art. 51. Classification of general-purpose AI models as general-purpose AI models with systemic risk (4)
- Ch. 2 — Obligations for Providers of General-Purpose AI Models
- Art. 53. Obligations for providers of general-purpose AI models (6)
- Art. 54. Authorised representatives of providers of general-purpose AI models (11)
- Art. 55. Obligations for providers of general-purpose AI models with systemic risk (6)
- Art. 56. Codes of practice (8)
- IX. Post-Market Monitoring, Information Sharing and Market Surveillance
- Ch. 1 — Post-Market Monitoring
- Art. 72. Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (7)
- Ch. 2 — Sharing of Information on Serious Incidents
- Art. 73. Reporting of serious incidents (12)
- X. Codes of Conduct and Guidelines
- Art. 95. Codes of conduct for voluntary application of specific requirements (6)
- XII. Penalties
- Art. 99. Penalties (8)
- Art. 100. Administrative fines on Union institutions, bodies, offices and agencies (7)
- Art. 101. Penalties for providers of general-purpose AI models (4)
- Annex I. Union Harmonisation Legislation Listed in Article 6(1)
- Annex III. High-Risk AI Systems Referred to in Article 6(2)
- Annex IV. Technical Documentation Referred to in Article 11(1)
Title I — General Provisions
Title II — Prohibited AI Practices
Title III — High-Risk AI Systems
Chapter 1 — Classification of AI Systems as High-Risk
Chapter 2 — Requirements for High-Risk AI Systems
Article 12. Record-keeping
8 obligations
EU-AIA-12-04
Monitoring
Facilitate post-market monitoring through logging
The logging capabilities must facilitate the post-market monitoring activities referred to in Article 72.
EU-AIA-12-05
Documentation
Record usage periods for biometric identification systems
For high-risk AI systems referred to in point 1(a) of Annex III (biometric identification systems), logging capabilities must record, at a minimum, the period of each use of the system, including the start date and time and end date and time of each use.
EU-AIA-12-06
Documentation
Record reference database information for biometric identification systems
For high-risk AI systems referred to in point 1(a) of Annex III (biometric identification systems), logging capabilities must record the reference database against which input data has been checked by the system.
EU-AIA-12-07
Documentation
Record matching input data for biometric identification systems
For high-risk AI systems referred to in point 1(a) of Annex III (biometric identification systems), logging capabilities must record the input data for which the search has led to a match.
EU-AIA-12-08
Documentation
Record identity of result verification personnel for biometric identification systems
For high-risk AI systems referred to in point 1(a) of Annex III (biometric identification systems), logging capabilities must record the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
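The four biometric-specific logging obligations above (EU-AIA-12-05 through EU-AIA-12-08) map naturally onto a per-use log record. A minimal sketch in Python — the `BiometricUseLog` class and its field names are illustrative assumptions, since the Regulation mandates the content of the logs, not their format:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class BiometricUseLog:
    """One per-use record covering the minimum logging content of
    Article 12(3) for Annex III point 1(a) biometric systems.
    Field names are illustrative; the Act fixes content, not schema."""
    use_start: datetime            # 12(3)(a): start date and time of the use
    use_end: datetime              # 12(3)(a): end date and time of the use
    reference_database: str        # 12(3)(b): database input was checked against
    matched_input_ref: str         # 12(3)(c): input data that led to a match
    verifier_ids: tuple[str, ...]  # 12(3)(d): persons verifying results (Art. 14(5))

record = BiometricUseLog(
    use_start=datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc),
    use_end=datetime(2025, 3, 1, 9, 4, tzinfo=timezone.utc),
    reference_database="watchlist-v12",   # hypothetical database name
    matched_input_ref="frame-000187",     # hypothetical input reference
    verifier_ids=("operator-42",),
)
assert record.use_end >= record.use_start
print(asdict(record)["reference_database"])  # watchlist-v12
```

A frozen dataclass is used so that individual records are immutable once written, which suits the tamper-evidence expectations around record-keeping.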
Article 13. Transparency and provision of information to deployers
14 obligations
EU-AIA-13-01
Transparency
Design systems for transparency to enable output interpretation
High-risk AI systems must be designed and developed to ensure sufficient transparency so that deployers can interpret the system's output and use it appropriately.
EU-AIA-13-02
Documentation
Provide instructions for use in appropriate digital format
High-risk AI systems must be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.
EU-AIA-13-03
Transparency
Provide provider identity and contact details
Instructions for use must specify the identity and contact details of the provider and, where applicable, of its authorised representative.
EU-AIA-13-04
Transparency
Specify system characteristics, capabilities and performance limitations
Instructions must include the characteristics, capabilities and limitations of performance of the high-risk AI system, including its intended purpose.
EU-AIA-13-05
Transparency
Provide accuracy, robustness and cybersecurity performance metrics
Instructions must specify the level of accuracy (including metrics), robustness and cybersecurity as tested and validated, together with any known and foreseeable circumstances that may have an impact on that expected level of performance.
EU-AIA-13-06
Transparency
Disclose known risks to health, safety and fundamental rights
Instructions must include any known or foreseeable circumstances related to intended use or reasonably foreseeable misuse that may lead to risks to health, safety or fundamental rights referred to in Article 9(2).
EU-AIA-13-07
Transparency
Document technical capabilities for output explanation
Where applicable, instructions must include the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output.
EU-AIA-13-08
Transparency
Report performance for specific persons or groups when appropriate
When appropriate, instructions must include the system's performance regarding specific persons or groups of persons on which the system is intended to be used.
EU-AIA-13-09
Data Governance
Provide input data specifications and training data information
When appropriate, instructions must include specifications for input data or other relevant information about the training, validation and testing data sets used, taking into account the intended purpose of the system.
EU-AIA-13-10
Transparency
Provide output interpretation guidance for deployers
Where applicable, instructions must include information to enable deployers to interpret the output of the high-risk AI system and use it appropriately.
EU-AIA-13-11
Transparency
Disclose predetermined system changes and performance impacts
Instructions must include any changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment.
EU-AIA-13-12
Human Oversight
Document human oversight measures and technical interpretation aids
Instructions must include the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the system's outputs by deployers.
EU-AIA-13-13
Documentation
Specify computational resources and maintenance requirements
Instructions must include the computational and hardware resources needed, expected system lifetime, and necessary maintenance and care measures, including their frequency, to ensure the proper functioning of the system, including software updates.
EU-AIA-13-14
Monitoring
Describe log collection and interpretation mechanisms
Where relevant, instructions must include a description of mechanisms within the high-risk AI system that allow deployers to properly collect, store and interpret the logs in accordance with Article 12.
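Taken together, the fourteen obligations above amount to a checklist for the instructions for use. One way to make that checklist enforceable is a machine-readable manifest that a provider's release pipeline validates before shipping — a minimal sketch, where every key name is an assumption for illustration, since Article 13 prescribes the information to be provided, not a file format:

```python
import json

# Illustrative manifest skeleton mirroring the Article 13 items;
# key names are assumptions, the Regulation does not prescribe a schema.
instructions_for_use = {
    "provider": {"identity": "ACME AI GmbH", "contact": "compliance@example.com"},
    "characteristics": {
        "intended_purpose": "",              # EU-AIA-13-04
        "accuracy_metrics": {},              # EU-AIA-13-05
        "known_risks": [],                   # EU-AIA-13-06
        "output_explanation": "",            # EU-AIA-13-07
        "performance_per_group": {},         # EU-AIA-13-08
        "input_data_specs": {},              # EU-AIA-13-09
        "output_interpretation": "",         # EU-AIA-13-10
    },
    "predetermined_changes": [],             # EU-AIA-13-11
    "human_oversight_measures": [],          # EU-AIA-13-12
    "resources_and_maintenance": {},         # EU-AIA-13-13
    "log_mechanisms": "",                    # EU-AIA-13-14
}

# A completeness check a provider's CI pipeline might run:
required = {"provider", "characteristics", "predetermined_changes",
            "human_oversight_measures", "resources_and_maintenance",
            "log_mechanisms"}
missing = required - instructions_for_use.keys()
assert not missing, f"instructions for use incomplete: {missing}"
print(json.dumps(sorted(required), indent=2))
```

The check only verifies that each section exists; whether the content is "concise, complete, correct and clear" remains a human judgment.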
Article 14. Human oversight
11 obligations
EU-AIA-14-01
Human Oversight
Design systems for effective human oversight
High-risk AI systems must be designed and developed with appropriate human-machine interface tools to enable effective oversight by natural persons during the period in which the system is in use.
EU-AIA-14-02
Human Oversight
Implement oversight measures to prevent or minimize risks
Human oversight must aim to prevent or minimize risks to health, safety or fundamental rights that may emerge during intended use or under conditions of reasonably foreseeable misuse.
EU-AIA-14-03
Human Oversight
Ensure oversight measures are commensurate to risks and context
Oversight measures must be proportionate to the risks, level of autonomy and context of use of the high-risk AI system.
EU-AIA-14-04
Human Oversight
Build oversight measures into the system when technically feasible
Providers must identify and build oversight measures into the high-risk AI system when technically feasible, before placing it on the market or putting it into service.
EU-AIA-14-05
Human Oversight
Identify oversight measures appropriate for deployer implementation
Providers must identify oversight measures that are appropriate to be implemented by the deployer before placing the system on the market or putting it into service.
EU-AIA-14-06
Human Oversight
Enable understanding of system capacities and limitations
The system must enable natural persons assigned to oversight to properly understand the relevant capacities and limitations of the system and duly monitor its operation, including to detect and address anomalies, dysfunctions and unexpected performance.
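The oversight obligations above can be pictured as a gate in the deployment path: the system surfaces its output together with a limitation signal, and a natural person can confirm, override, or stop it. A toy sketch — the `oversee` function, the confidence threshold, and the "stop" verdict are assumptions for illustration, not a mechanism prescribed by the Act:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Output:
    decision: str
    confidence: float  # a capacity/limitation signal surfaced to the overseer

def oversee(output: Output,
            review: Callable[[Output], str],
            auto_threshold: float = 0.99) -> str:
    """Route low-confidence outputs to a human reviewer who may
    confirm, override, or halt the system — an illustrative sketch
    of Article 14-style oversight, not a prescribed design."""
    if output.confidence >= auto_threshold:
        return output.decision      # accepted, but still logged and auditable
    verdict = review(output)        # human-in-the-loop step
    if verdict == "stop":
        raise RuntimeError("operation halted by human overseer")
    return verdict

# Usage: the overseer overrides a low-confidence match.
result = oversee(Output("match", 0.62), review=lambda o: "no-match")
print(result)  # no-match
```

Surfacing the confidence value to the reviewer, rather than hiding it, is one concrete way a design can support the "understand capacities and limitations" obligation above.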