Day 11 of 30

The EU AI Act — Structure, Risk Tiers, and Key Definitions

⏱ 20 min 📊 Medium AIGP Certification Prep

The EU AI Act is the world's first comprehensive AI regulation. It's heavily tested on the AIGP exam — expect questions on risk classification, roles, obligations, and enforcement. We'll cover it in four lessons (Days 11–14).

[Figure: EU AI Act risk classification pyramid showing four tiers.]
The EU AI Act uses a risk-based approach — obligations increase with the level of risk.

The Risk-Based Framework

The EU AI Act classifies AI systems into four risk tiers:

Unacceptable Risk (BANNED) — AI practices prohibited entirely under Article 5:

- Social scoring by public authorities

- Real-time remote biometric identification by law enforcement in publicly accessible spaces (with narrow exceptions, such as searching for victims of serious crimes)

- Subliminal manipulation that causes harm

- Exploitation of vulnerabilities of specific groups (age, disability)

- Emotion recognition in workplaces and education (with exceptions)

- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases

- Predictive policing based solely on profiling

High Risk — AI systems subject to strict requirements before market placement. Two categories:

- AI that is a safety component of a product covered by EU harmonization legislation (medical devices, machinery, vehicles)

- Standalone AI systems listed in Annex III (biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)

Limited Risk — AI systems with specific transparency obligations: chatbots, deepfakes, emotion recognition, biometric categorization. Users must be informed they're interacting with AI or viewing AI-generated content.

Minimal Risk — All other AI systems. No specific obligations under the Act (but general consumer protection and existing laws still apply).
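The four tiers above can be sketched as a simple lookup. This is an illustrative study aid only — the dictionary entries and the `risk_tier` helper are hypothetical names, not a legal classification tool; real classification requires case-by-case analysis of Article 5, Annex I, and Annex III.

```python
# Illustrative mapping of example use cases to EU AI Act risk tiers.
# Tier names come from the lesson; the use-case keys are examples only.
RISK_TIERS = {
    "social_scoring": "unacceptable",   # prohibited under Article 5
    "cv_screening": "high",             # Annex III: employment
    "credit_scoring": "high",           # Annex III: essential services
    "customer_chatbot": "limited",      # transparency obligations
    "spam_filter": "minimal",           # no specific obligations
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example, else flag for assessment."""
    return RISK_TIERS.get(use_case, "unknown - requires case-by-case assessment")

print(risk_tier("credit_scoring"))   # high
print(risk_tier("customer_chatbot")) # limited
```

The point of the sketch: obligations attach to the *use case*, not the underlying technology — the same model could be minimal risk as a spam filter and high risk as a CV screener.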

Knowledge Check
An AI system used to evaluate creditworthiness of individuals falls into which EU AI Act risk category?
AI systems used to evaluate creditworthiness are explicitly listed in Annex III as high-risk AI systems. They assess individuals' access to essential financial services, placing them in the high-risk category with associated compliance obligations.

Key Definitions and Roles

The EU AI Act defines specific roles in the AI value chain:

Provider — The entity that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark. The provider bears the heaviest obligations.

Deployer — Any entity that uses an AI system under its authority (except for personal, non-professional use). Formerly called "user" in earlier drafts.

Importer — A person or entity in the EU that places an AI system from a non-EU provider on the market.

Distributor — Any entity in the supply chain (other than provider or importer) that makes an AI system available on the EU market.

Authorized representative — A person or entity in the EU designated by a non-EU provider to act on its behalf.

Exam tip: The distinction between provider and deployer is critical. Providers have development-phase obligations. Deployers have deployment-phase obligations. A deployer who substantially modifies a high-risk AI system becomes a provider.
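The exam tip's role logic can be expressed as a small decision rule. This is a simplified sketch with hypothetical parameter names — it compresses the Act's substantial-modification rule (Article 25) into three booleans and ignores importers, distributors, and authorized representatives.

```python
def effective_role(develops_system: bool, uses_system: bool,
                   substantially_modifies: bool) -> str:
    """Simplified provider/deployer assignment under the EU AI Act.

    A deployer that substantially modifies a high-risk AI system is
    treated as a provider for the modified system.
    """
    if develops_system or substantially_modifies:
        return "provider"
    if uses_system:
        return "deployer"
    return "out of scope (e.g. personal, non-professional use)"

# The US hiring-tool developer from the knowledge check below:
print(effective_role(develops_system=True, uses_system=False,
                     substantially_modifies=False))  # provider
# The European recruitment firm that licenses and uses it:
print(effective_role(develops_system=False, uses_system=True,
                     substantially_modifies=False))  # deployer
```

Note how the rule captures the exam trap: flipping `substantially_modifies` to `True` moves the recruitment firm from deployer to provider, with all the development-phase obligations that follow.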

Knowledge Check
A US-based company develops an AI hiring tool and licenses it to a European recruitment firm. Under the EU AI Act, who is the "provider" and who is the "deployer"?
The US company developed the AI system and placed it on the market — it's the provider. The European firm uses the system under its authority — it's the deployer. The EU AI Act applies regardless of where the provider is based, as long as the AI system is placed on the EU market or affects persons in the EU.

EU AI Act Timeline

Key enforcement dates:

- August 2024 — EU AI Act enters into force

- February 2025 — Prohibited AI practices enforceable

- August 2025 — GPAI model provisions apply; governance structure established

- August 2026 — Most provisions apply, including high-risk AI in Annex III

- August 2027 — High-risk AI that is a safety component of regulated products

Organizations need governance programs in place now to meet these deadlines.

Annex III — High-Risk AI System Categories

Know these categories for the exam:

1. Biometrics — Remote biometric identification, biometric categorization, emotion recognition

2. Critical infrastructure — AI managing safety of critical digital infrastructure, road traffic, water, gas, heating, electricity

3. Education and training — AI determining access to education, evaluating learning outcomes, detecting prohibited behavior during exams

4. Employment — AI for recruitment, job application filtering, performance evaluation, promotion decisions

5. Essential services — Credit scoring, life/health insurance risk assessment, social assistance eligibility

6. Law enforcement — Risk assessment of individuals, polygraphs, evidence evaluation, profiling during criminal investigations

7. Migration and border control — Risk assessment for irregular migration, visa and residence permit applications

8. Justice and democratic processes — AI assisting judicial authorities in fact-finding and law application
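As a memorization aid, the eight category headings can be held as a small data structure. This is a study sketch, not the annex itself — each real Annex III entry contains detailed sub-conditions that this flat list omits.

```python
# The eight Annex III high-risk category headings, as listed in the lesson.
ANNEX_III = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education and training",
    4: "Employment",
    5: "Essential services",
    6: "Law enforcement",
    7: "Migration and border control",
    8: "Justice and democratic processes",
}

def is_annex_iii_category(name: str) -> bool:
    """Check whether a heading is one of the eight Annex III categories."""
    return name in ANNEX_III.values()

print(is_annex_iii_category("Employment"))  # True
print(is_annex_iii_category("Marketing"))   # False
```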

Final Check
An AI system categorizes customer service calls by emotion to route angry customers to senior agents. Under the EU AI Act, this system:
Emotion recognition systems fall into the "limited risk" category with transparency obligations: people must be informed they are subject to emotion recognition. The system is not prohibited outright — the Article 5 ban covers emotion recognition in workplaces and education, and routing customer calls falls into neither setting. It is also not high-risk unless the use falls within an Annex III category.
🎯
Day 11 Complete
"The EU AI Act uses four risk tiers: unacceptable (banned), high risk (strict requirements), limited risk (transparency), and minimal risk (no specific rules). Know Annex III categories — they define what's high-risk."
Next Lesson
The EU AI Act — Provider Obligations for High-Risk AI