The EU AI Act is the world's first comprehensive AI regulation. It's heavily tested on the AIGP exam — expect questions on risk classification, roles, obligations, and enforcement. We'll cover it in four lessons (Days 11–14).
The EU AI Act classifies AI systems into four risk tiers:
Unacceptable Risk (BANNED) — AI practices prohibited entirely under Article 5:
- Social scoring leading to detrimental or unfavorable treatment (the final text covers both public and private actors)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions)
- Subliminal or purposefully manipulative techniques that materially distort behavior and cause significant harm
- Exploitation of vulnerabilities due to age, disability, or social or economic situation
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Predicting the risk of criminal offenses based solely on profiling or personality traits
High Risk — AI systems subject to strict requirements before market placement. Two categories:
- AI that is a safety component of a product covered by EU harmonization legislation (medical devices, machinery, vehicles)
- Standalone AI systems listed in Annex III (biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)
Limited Risk — AI systems with specific transparency obligations: chatbots, deepfakes, emotion recognition, biometric categorization. Users must be informed they're interacting with AI or viewing AI-generated content.
Minimal Risk — All other AI systems. No specific obligations under the Act (but general consumer protection and existing laws still apply).
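To make the taxonomy above concrete, here is a toy Python sketch of the four tiers. The example systems and their mappings are study-aid assumptions of mine, not legal classifications of any real product:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative labels)."""
    UNACCEPTABLE = "prohibited under Article 5"
    HIGH = "strict pre-market requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical example systems mapped to tiers; a study aid only.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool (Annex III: employment)": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```

Note how the tiers are mutually exclusive: a system lands in exactly one tier, and classification drives everything else in the Act.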
The EU AI Act defines specific roles in the AI value chain:
Provider — The entity that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark. The provider bears the heaviest obligations.
Deployer — Any entity that uses an AI system under its authority (except for personal, non-professional use). Formerly called "user" in earlier drafts.
Importer — A person or entity in the EU that places an AI system from a non-EU provider on the market.
Distributor — Any entity in the supply chain (other than provider or importer) that makes an AI system available on the EU market.
Authorized representative — A person or entity in the EU designated by a non-EU provider to act on its behalf.
Exam tip: The distinction between provider and deployer is critical. Providers have development-phase obligations; deployers have deployment-phase obligations. Under Article 25, a deployer becomes a provider if it puts its own name or trademark on a high-risk AI system, substantially modifies one, or changes a system's intended purpose so that it becomes high-risk (see the sketch below).
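The role flip can be captured as a simple boolean check. A minimal Python sketch, paraphrasing the Article 25 triggers; the function and parameter names are my own illustration, not statutory language:

```python
def becomes_provider(
    puts_own_name_or_trademark: bool,
    substantially_modifies_high_risk: bool,
    changes_purpose_making_high_risk: bool,
) -> bool:
    """Toy paraphrase of the Article 25 triggers: any one of them
    turns a deployer (or distributor/importer) into a provider,
    with the full provider obligations that follow. Not legal advice.
    """
    return (
        puts_own_name_or_trademark
        or substantially_modifies_high_risk
        or changes_purpose_making_high_risk
    )

# Example: a deployer that fine-tunes and rebrands a high-risk
# recruitment tool would take on provider obligations.
print(becomes_provider(True, True, False))  # True
```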
Key enforcement dates:
- August 1, 2024 — EU AI Act enters into force
- February 2, 2025 — Prohibited AI practices and AI literacy obligations become enforceable
- August 2, 2025 — GPAI model provisions apply; governance structure and penalties take effect
- August 2, 2026 — Most provisions apply, including high-risk AI listed in Annex III
- August 2, 2027 — Extended deadline for high-risk AI that is a safety component of Annex I regulated products
Organizations need governance programs in place now to meet these deadlines.
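Because the deadlines are staggered, a compliance team might track which obligations already apply on a given date. A toy Python helper, assuming shorthand milestone labels of my own (the dates themselves are the Act's published application dates):

```python
from datetime import date

# Application dates under the Act, with shorthand labels.
MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Article 5 prohibitions apply",
    date(2025, 8, 2): "GPAI provisions and governance apply",
    date(2026, 8, 2): "most provisions apply, incl. Annex III high-risk",
    date(2027, 8, 2): "high-risk AI in Annex I regulated products",
}

def milestones_in_effect(today: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

# As of early 2026, the first three milestones apply.
print(milestones_in_effect(date(2026, 1, 1)))
```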
Know these eight Annex III high-risk categories for the exam:
1. Biometrics — Remote biometric identification, biometric categorization, emotion recognition
2. Critical infrastructure — AI managing safety of critical digital infrastructure, road traffic, water, gas, heating, electricity
3. Education and vocational training — AI determining access to education, evaluating learning outcomes, detecting prohibited behavior during exams
4. Employment — AI for recruitment, job application filtering, performance evaluation, promotion decisions
5. Essential services — Credit scoring, life/health insurance risk assessment, social assistance eligibility
6. Law enforcement — Risk assessment of individuals, polygraphs, evidence evaluation, profiling during criminal investigations
7. Migration, asylum, and border control — Risk assessment for irregular migration, visa and residence permit applications, asylum applications
8. Justice and democratic processes — AI assisting judicial authorities in researching and interpreting facts and law, and AI intended to influence the outcome of elections or referenda