Providers of high-risk AI systems face the most extensive obligations under the EU AI Act. The AIGP exam tests your knowledge of each major requirement. Let's go through them article by article.
Article 9 (Risk Management) — Providers must establish a continuous, iterative risk management system running throughout the AI system's lifecycle. This includes:
- Identification and analysis of known and reasonably foreseeable risks
- Estimation and evaluation of risks arising from intended use and reasonably foreseeable misuse
- Risk mitigation measures adopted based on assessment results
- Testing to ensure risk management measures are effective
The risk management system must consider risks to health, safety, and fundamental rights. It must be documented and updated throughout the system's lifecycle.
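The Act prescribes no particular format for this documentation. As one illustration, a provider might maintain a living risk register in code; the Python sketch below is a hypothetical structure (all class names, field names, and scoring scales are assumptions of mine, not terms from the Act):

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskSource(Enum):
    INTENDED_USE = "intended use"
    FORESEEABLE_MISUSE = "reasonably foreseeable misuse"

@dataclass
class RiskEntry:
    """One identified risk to health, safety, or fundamental rights."""
    description: str
    source: RiskSource
    severity: int                      # illustrative 1-5 scale
    likelihood: int                    # illustrative 1-5 scale
    mitigations: list[str] = field(default_factory=list)
    test_evidence: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def residual_score(self) -> int:
        # Crude estimate; recomputed on each iteration of the lifecycle.
        return self.severity * self.likelihood

# The register is revisited whenever the system or its context changes.
register = [
    RiskEntry(
        description="Degraded performance for under-represented groups",
        source=RiskSource.INTENDED_USE,
        severity=4,
        likelihood=3,
        mitigations=["rebalance training data", "subgroup performance gates"],
        test_evidence=["subgroup_eval_v2.pdf"],
    ),
]
for risk in register:
    print(f"{risk.description}: residual score {risk.residual_score}")
```

Re-estimating each risk after every mitigation and test cycle is what makes the process iterative rather than a one-off assessment.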
Article 10 (Data and Data Governance) — Training, validation, and testing datasets must meet specific quality criteria:
- Relevant, sufficiently representative, and as free of errors as possible
- Appropriate statistical properties for the intended geographic, behavioral, or functional setting
- Examination for possible biases that are likely to affect health and safety, negatively impact fundamental rights, or lead to discrimination
- Gaps and shortcomings must be addressed through appropriate data governance measures
For high-risk AI using personal data, data governance practices must ensure compliance with data protection law (notably the GDPR), including purpose limitation and data minimization.
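The bias-examination and representativeness duties can be operationalized with simple dataset audits. A minimal sketch using only the Python standard library (the subgroup attribute, records, and threshold are illustrative assumptions; the Act sets no numeric cut-off):

```python
from collections import Counter

# Toy training records: (region, label). In practice these come from the
# provider's data governance pipeline.
records = [
    ("EU-North", 1), ("EU-North", 0), ("EU-South", 1),
    ("EU-South", 1), ("EU-South", 0), ("EU-West", 1),
]

def subgroup_shares(rows, key_index=0):
    """Share of records per subgroup -- a first check on representativeness."""
    counts = Counter(row[key_index] for row in rows)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

MIN_SHARE = 0.10  # illustrative threshold, not mandated by the Act
for group, share in subgroup_shares(records).items():
    flag = "OK" if share >= MIN_SHARE else "GAP: address via data governance"
    print(f"{group}: {share:.0%} {flag}")
```

A real audit would cover many more attributes and document how each identified gap or shortcoming was addressed.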
Article 11 (Technical Documentation) — Technical documentation must be drawn up before the system is placed on the market and kept up to date. Its contents are specified in Annex IV (a documentation skeleton is sketched after this list):
- General description of the AI system
- Detailed description of development process
- Monitoring, functioning, and control information
- Risk management documentation
- Changes throughout the lifecycle
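Because the documentation must be kept up to date, some providers version it alongside the source code. A hedged sketch that scaffolds an Annex IV-shaped set of stubs (the section names paraphrase the list above; the file layout is my own convention, not prescribed by the Act):

```python
from pathlib import Path

# Section names paraphrase the Annex IV items listed above.
ANNEX_IV_SECTIONS = [
    "01_general_description",
    "02_development_process",
    "03_monitoring_functioning_control",
    "04_risk_management",
    "05_lifecycle_changes",
]

def scaffold_tech_docs(root: str = "technical_documentation") -> None:
    """Create one Markdown stub per section so gaps are visible in review."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for section in ANNEX_IV_SECTIONS:
        path = base / f"{section}.md"
        if not path.exists():
            title = section.split("_", 1)[1].replace("_", " ").title()
            path.write_text(f"# {title}\n\nTODO\n")

scaffold_tech_docs()
```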
Article 12 (Record-Keeping) — High-risk AI systems must have automatic logging capabilities. Logs must record events relevant to identifying risks, enable post-market monitoring, and facilitate traceability.
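Article 12 does not mandate a log format, but structured, timestamped records are a natural fit for traceability and post-market monitoring. A minimal sketch with Python's standard logging module (the event fields are assumptions of mine):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai.audit")

def log_inference_event(input_ref: str, output_ref: str, model_version: str) -> None:
    """Record one inference as a structured, timestamped event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # supports traceability across updates
        "input_ref": input_ref,           # a reference, not raw data (data minimization)
        "output_ref": output_ref,
    }
    logger.info(json.dumps(event))

log_inference_event("case-2041", "decision-77", "v1.3.0")
```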
Article 13 (Transparency) — Providers must ensure high-risk AI systems are designed to be sufficiently transparent to enable deployers to interpret output and use it appropriately. Instructions for use must include the provider's identity, system characteristics, performance metrics, known limitations, and human oversight measures.
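Treating the instructions for use as structured data makes it harder to ship a system with a required element missing. A hypothetical sketch (the class and field names mirror the list above but are my own, not the Act's):

```python
from dataclasses import dataclass, field

@dataclass
class InstructionsForUse:
    """Minimum transparency content paraphrasing Article 13."""
    provider_identity: str
    system_characteristics: str
    performance_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)

    def render(self) -> str:
        metrics = ", ".join(f"{k}={v}" for k, v in self.performance_metrics.items())
        return "\n".join([
            f"Provider: {self.provider_identity}",
            f"Characteristics: {self.system_characteristics}",
            f"Performance: {metrics}",
            "Limitations: " + "; ".join(self.known_limitations),
            "Human oversight: " + "; ".join(self.human_oversight_measures),
        ])

ifu = InstructionsForUse(
    provider_identity="Example Provider GmbH",
    system_characteristics="Binary credit-risk classifier",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Not validated for applicants under 21"],
    human_oversight_measures=["Reviewer can override any score"],
)
print(ifu.render())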
Article 14 (Human Oversight) — High-risk AI systems must be designed to allow effective oversight by natural persons during use (a minimal control-layer sketch follows the list), including the ability to:
- Understand the system's capabilities and limitations
- Monitor the system's operation
- Interpret the system's output correctly
- Decide not to use the system or override/reverse its output
- Intervene or stop the system ("stop button")
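These capabilities map naturally onto a human-in-the-loop control layer. The sketch below is one possible design, not an Act-prescribed one; the wrapper, its confidence threshold, and the escalation rule are all illustrative assumptions:

```python
class OversightWrapper:
    """Wraps a model so a natural person can monitor, override, or halt it."""

    def __init__(self, model, confidence_threshold: float = 0.8):
        self.model = model
        self.threshold = confidence_threshold  # illustrative escalation rule
        self.stopped = False

    def stop(self) -> None:
        """'Stop button': halt automated output until further notice."""
        self.stopped = True

    def decide(self, features, human_review):
        if self.stopped:
            # System halted: the human decides with no automated proposal.
            return human_review(features, proposal=None)
        label, confidence = self.model(features)
        if confidence < self.threshold:
            # Low confidence: escalate so the human can confirm or reverse.
            return human_review(features, proposal=(label, confidence))
        return label

# Toy usage: a stub model plus a reviewer who makes the final call.
def stub_model(features):
    return "approve", 0.65  # (output, confidence)

def reviewer(features, proposal):
    # The human sees the proposal (or None after a stop) and decides.
    return "deny"

wrapper = OversightWrapper(stub_model)
print(wrapper.decide({"income": 30_000}, reviewer))  # low confidence -> "deny"
```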
Article 15 (Accuracy, Robustness, Cybersecurity) — High-risk AI systems must achieve appropriate levels of the following (a toy evaluation sketch follows the list):
- Accuracy — consistent with the intended purpose
- Robustness — resilient to errors, faults, and attempted manipulations
- Cybersecurity — protected against unauthorized access and adversarial attacks
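Appropriate levels must be backed by testing; a provider might gate releases on accuracy measured over both clean and perturbed inputs. A toy sketch (the stub model, dataset, noise model, and metrics are all illustrative assumptions):

```python
import random

random.seed(0)  # deterministic for reproducible checks

def evaluate(model, dataset):
    """Fraction of correct predictions on (features, label) pairs."""
    correct = sum(model(x) == y for x, y in dataset)
    return correct / len(dataset)

def perturb(x, noise=0.1):
    """Toy input perturbation standing in for errors, faults, or manipulation."""
    return x + random.uniform(-noise, noise)

# Stub model and data: classify positive values as 1.
model = lambda x: 1 if x > 0 else 0
dataset = [(0.5, 1), (1.2, 1), (-0.4, 0), (0.05, 1), (-1.0, 0)]

accuracy = evaluate(model, dataset)
robust_accuracy = evaluate(model, [(perturb(x), y) for x, y in dataset])
print(f"accuracy={accuracy:.2f}, accuracy under perturbation={robust_accuracy:.2f}")
# A release gate might require both metrics to stay above the declared levels.
```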
Before placing a high-risk AI system on the market, providers must undergo a conformity assessment to demonstrate compliance with all requirements.
Two assessment procedures:
1. Internal conformity assessment (most high-risk AI) — The provider self-assesses compliance. Requires a quality management system and technical documentation review.
2. Third-party conformity assessment — Required for certain biometric systems (where harmonized standards are not applied in full) and for AI covered by the sectoral product legislation listed in Annex I; carried out by a notified body (an independent third-party assessor).
After successful conformity assessment:
- Provider issues an EU declaration of conformity
- Provider affixes the CE marking to the AI system
- System is registered in the EU database for high-risk AI systems