EU-AI-Act
Regulation (EU) 2024/1689 — Artificial Intelligence Act
- I. General Provisions
- Art. 1. Subject matter ref
- Art. 2. Scope ref
- Art. 3. Definitions ref
- Art. 4. AI literacy ref
- II. Prohibited AI Practices
- Art. 5. Prohibited artificial intelligence practices ref
- III. High-Risk AI Systems
- Ch. 1 — Classification of AI Systems as High-Risk
- Art. 6. Classification rules for high-risk AI systems (7)
- Art. 7. Amendments to Annex III (12)
- Ch. 2 — Requirements for High-Risk AI Systems
- Art. 8. Compliance with the requirements (5)
- Art. 9. Risk management system (15)
- Art. 10. Data and data governance (20)
- Art. 11. Technical documentation (7)
- Art. 12. Record-keeping (8)
- Art. 13. Transparency and provision of information to deployers (14)
- Art. 14. Human oversight (11)
- Art. 15. Accuracy, robustness and cybersecurity (9)
- Ch. 3 — Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties
- Art. 16. Obligations of providers of high-risk AI systems (12)
- Art. 17. Quality management system (16)
- Art. 18. Documentation keeping (6)
- Art. 19. Automatically generated logs (2)
- Art. 20. Corrective actions and duty of information (5)
- Art. 21. Cooperation with competent authorities (3)
- Art. 22. Authorised representatives of providers of high-risk AI systems (2)
- Art. 23. Obligations of importers (12)
- Art. 24. Obligations of distributors (10)
- Art. 25. Responsibilities along the AI value chain (9)
- Ch. 4 — Obligations of Deployers of High-Risk AI Systems
- Art. 26. Obligations of deployers of high-risk AI systems (17)
- Art. 27. Fundamental rights impact assessment for high-risk AI systems (10)
- Ch. 5 — Notifying Authorities and Notified Bodies
- Art. 28. Notifying authorities (8)
- IV. Transparency Obligations for Providers and Deployers of Certain AI Systems
- Art. 50. Transparency obligations for providers and deployers of certain AI systems (9)
- V. General-Purpose AI Models
- Ch. 1 — Classification Rules
- Art. 51. Classification of general-purpose AI models as general-purpose AI models with systemic risk (4)
- Ch. 2 — Obligations for Providers of General-Purpose AI Models
- Art. 53. Obligations for providers of general-purpose AI models (6)
- Art. 54. Authorised representatives of providers of general-purpose AI models (11)
- Art. 55. Obligations for providers of general-purpose AI models with systemic risk (6)
- Art. 56. Codes of practice (8)
- IX. Post-Market Monitoring, Information Sharing and Market Surveillance
- Ch. 1 — Post-Market Monitoring
- Art. 72. Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (7)
- Ch. 2 — Sharing of Information on Serious Incidents
- Art. 73. Reporting of serious incidents (12)
- X. Codes of Conduct and Guidelines
- Art. 95. Codes of conduct for voluntary application of specific requirements (6)
- XII. Penalties
- Art. 99. Penalties (8)
- Art. 100. Administrative fines on Union institutions, bodies, offices and agencies (7)
- Art. 101. Penalties for providers of general-purpose AI models (4)
- Annex I. Union Harmonisation Legislation Listed in Article 6(1)
- Annex III. High-Risk AI Systems Referred to in Article 6(2)
- Annex IV. Technical Documentation Referred to in Article 11(1)
Title III — High-Risk AI Systems
Chapter 2 — Requirements for High-Risk AI Systems
Article 14. Human oversight
5 obligations
EU-AIA-14-07
Human Oversight
Enable awareness of automation bias
The system must enable natural persons to remain aware of the possible tendency of automatically relying or over-relying on the output produced by the high-risk AI system ('automation bias').
EU-AIA-14-08
Human Oversight
Enable correct interpretation of system output
The system must enable natural persons to correctly interpret the high-risk AI system's output, taking into account, for example, the interpretation tools and methods available.
EU-AIA-14-09
Human Oversight
Enable decision not to use or disregard system output
The system must enable natural persons to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse its output.
EU-AIA-14-10
Human Oversight
Enable intervention and system interruption
The system must enable natural persons to intervene in the operation of the high-risk AI system or interrupt it through a 'stop' button or a similar procedure that allows the system to come to a halt in a safe state.
EU-AIA-14-11
Human Oversight
Require dual human verification for biometric identification systems
For high-risk AI systems used for biometric identification (Annex III, point 1(a)), deployers must ensure that no action or decision is taken on the basis of the identification resulting from the system unless it has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.
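The dual-verification rule in EU-AIA-14-11 lends itself to a simple engineering gate: no downstream action until two distinct natural persons have confirmed the match. The sketch below is illustrative only; the class, field, and method names are assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class BiometricMatch:
    """A candidate identification produced by a high-risk biometric AI system."""
    subject_id: str
    confirmations: set = field(default_factory=set)  # distinct verifying persons

    def confirm(self, verifier: str) -> None:
        """Record a separate verification by a natural person."""
        self.confirmations.add(verifier)

    def may_act(self) -> bool:
        """No action or decision unless at least two distinct persons have verified."""
        return len(self.confirmations) >= 2

match = BiometricMatch("subject-042")
match.confirm("officer_a")
assert not match.may_act()      # one verifier is not enough
match.confirm("officer_a")      # the same person confirming twice does not count
assert not match.may_act()
match.confirm("officer_b")
assert match.may_act()          # two distinct natural persons have confirmed
```

Using a set rather than a counter makes the "separately verified" requirement structural: repeated confirmations by the same person cannot satisfy the gate.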
Article 15. Accuracy, robustness and cybersecurity
9 obligations
EU-AIA-15-01
Requirement
Design and develop for appropriate accuracy, robustness and cybersecurity
High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity, and to perform consistently in those respects throughout their lifecycle.
EU-AIA-15-02
Transparency
Declare accuracy levels and metrics in instructions
The levels of accuracy and the relevant accuracy metrics of high-risk AI systems must be declared in the accompanying instructions of use.
EU-AIA-15-03
Requirement
Ensure resilience to errors, faults and inconsistencies
High-risk AI systems must be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which it operates, in particular due to interaction with natural persons or other systems.
EU-AIA-15-04
Requirement
Implement technical and organisational resilience measures
Technical and organisational measures must be taken to ensure high-risk AI systems are resilient regarding errors, faults or inconsistencies; robustness may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.
EU-AIA-15-05
Requirement
Eliminate or reduce biased feedback loops in learning systems
High-risk AI systems that continue to learn after being placed on the market or put into service must be developed to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations ('feedback loops').
EU-AIA-15-06
Requirement
Address feedback loops with appropriate mitigation measures
High-risk AI systems that continue to learn must ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
EU-AIA-15-07
Requirement
Ensure resilience against unauthorized third-party alteration
High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.
EU-AIA-15-08
Requirement
Implement appropriate cybersecurity technical solutions
Technical solutions aiming to ensure the cybersecurity of high-risk AI systems must be appropriate to the relevant circumstances and the risks.
EU-AIA-15-09
Requirement
Address AI-specific vulnerabilities with technical solutions
Technical solutions to address AI-specific vulnerabilities must include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks such as data poisoning, model poisoning, adversarial examples ('model evasion') and confidentiality attacks, as well as measures to address model flaws.
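One way to read the feedback-loop obligations (EU-AIA-15-05 and 15-06) in engineering terms is to exclude the system's own outputs from future training data. The sketch below assumes a `label_source` provenance field on each training record; that convention is an illustration, not something the Act prescribes.

```python
def filter_training_batch(records):
    """Drop records whose label originated from the system's own prior output,
    so possibly biased outputs do not feed back into future training
    (cf. the 'feedback loops' obligations). 'label_source' is an assumed field."""
    return [r for r in records if r.get("label_source") != "system_output"]

batch = [
    {"id": 1, "label_source": "human_annotator"},
    {"id": 2, "label_source": "system_output"},   # excluded: would close the loop
    {"id": 3, "label_source": "human_annotator"},
]
kept = filter_training_batch(batch)
assert [r["id"] for r in kept] == [1, 3]
```

Provenance filtering is only one possible mitigation; the obligation is technology-neutral, and measures such as periodic re-annotation or drift monitoring could serve the same purpose.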
Chapter 3 — Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties
Article 16. Obligations of providers of high-risk AI systems
11 obligations
EU-AIA-16-01
Requirement
Ensure compliance with Chapter 2 requirements
Providers must ensure their high-risk AI systems comply with all requirements set out in Chapter 2 of Title III of the EU AI Act.
EU-AIA-16-02
Transparency
Provider identification marking obligation
Providers must indicate their name, registered trade name or registered trade mark, and contact address on the AI system or, where that is not possible, on its packaging or accompanying documentation.
EU-AIA-16-03
Risk Management
Quality management system establishment
Providers must establish and maintain a quality management system that complies with the requirements specified in Article 17.
EU-AIA-16-04
Documentation
Technical documentation retention
Providers must keep and maintain the technical documentation as specified in Article 18.
EU-AIA-16-05
Monitoring
Automatic log retention
Providers must keep the logs automatically generated by their high-risk AI systems as specified in Article 19, when such logs are under their control.
EU-AIA-16-06
Conformity
Pre-market conformity assessment
Providers must ensure their high-risk AI system undergoes the relevant conformity assessment procedure as specified in Article 43 before it is placed on the market or put into service.
EU-AIA-16-07
Conformity
EU declaration of conformity preparation
Providers must draw up an EU declaration of conformity in accordance with the requirements specified in Article 47.
EU-AIA-16-08
Conformity
CE marking affixing
Providers must affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or accompanying documentation, to indicate conformity with the EU AI Act.
EU-AIA-16-09
Registration
Registration obligation compliance
Providers must comply with the registration obligations as specified in Article 49(1).
EU-AIA-16-10
Reporting
Corrective actions and information provision
Providers must take necessary corrective actions and provide information as required in Article 20.
EU-AIA-16-11
Transparency
Conformity demonstration upon authority request
Providers must demonstrate the conformity of their high-risk AI system with Chapter 2 requirements upon a reasoned request from a national competent authority.
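The log-retention obligation (EU-AIA-16-05, referring to Article 19) can be sketched as an append-only event store pruned only past a retention floor; Article 19(1) sets that floor at a minimum of six months. Everything below is a minimal in-memory illustration, not an implementation the Act mandates: class and method names are assumptions, and a real system would persist entries durably.

```python
import time

SIX_MONTHS = 183 * 24 * 3600  # retention floor of at least six months (approx.)

class EventLog:
    """Minimal sketch of provider-side retention of automatically generated logs.
    Entries are appended with a timestamp; prune() removes only entries older
    than the retention window, so nothing is discarded before the floor."""

    def __init__(self, retention_seconds=SIX_MONTHS):
        self.retention_seconds = retention_seconds
        self._entries = []

    def record(self, event, now=None):
        """Append an event with its timestamp ('now' is injectable for testing)."""
        self._entries.append({"ts": now if now is not None else time.time(), **event})

    def prune(self, now=None):
        """Drop entries past retention; returns the number removed."""
        cutoff = (now if now is not None else time.time()) - self.retention_seconds
        before = len(self._entries)
        self._entries = [e for e in self._entries if e["ts"] >= cutoff]
        return before - len(self._entries)

log = EventLog(retention_seconds=100)
log.record({"evt": "inference"}, now=0)
log.record({"evt": "inference"}, now=90)
assert log.prune(now=150) == 1   # the entry at t=0 is past the 100s window
assert len(log._entries) == 1    # the entry at t=90 is still retained
```

Because six months is a floor ("at least"), a provider may retain longer; the retention period here is a constructor parameter rather than a hard-coded maximum.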