Day 14 of 30

The EU AI Act — Transparency, GPAI, and Enforcement

⏱ 18 min 📊 Advanced AIGP Certification Prep

This is our fourth and final EU AI Act lesson. Today we cover transparency obligations for limited-risk AI, deeper detail on GPAI with systemic risk, and the enforcement framework — including penalties that can reach 7% of global turnover.

Limited-Risk Transparency Requirements (Article 50)

AI systems classified as limited risk must meet specific transparency obligations:

AI-generated content:

- Image, audio, or video content that constitutes a deepfake must be labeled as artificially generated or manipulated

- AI-generated text published to inform the public on matters of public interest must be labeled as AI-generated

- Exception: AI-generated content that has undergone substantial human review and for which a person or entity holds editorial responsibility

Chatbots and conversational AI:

- Users must be informed that they are interacting with an AI system

- The notification must occur no later than the first interaction or exposure

Emotion recognition and biometric categorization:

- Individuals must be informed when emotion recognition or biometric categorization systems are used

- Must be informed of the type of personal data processed and the purpose

Knowledge Check
A news website uses AI to generate article summaries that appear alongside human-written articles. What must the website do under the EU AI Act's transparency requirements?
Answer: Label the summaries as AI-generated. AI-generated text published to inform the public on matters of public interest must be labeled as such, and news article summaries fall squarely into this category. The obligation applies regardless of whether the content is a deepfake.

Enforcement and Penalties

The EU AI Act establishes a tiered penalty structure:

Prohibited AI practices — Up to 35 million EUR or 7% of global annual turnover (whichever is higher)

High-risk AI non-compliance — Up to 15 million EUR or 3% of global annual turnover

Incorrect information to authorities — Up to 7.5 million EUR or 1.5% of global annual turnover

SME and startup adjustments — For SMEs and startups, the cap is the lower of the two amounts (fixed sum vs. percentage of turnover)
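The tiered structure above boils down to a simple rule: standard firms face the higher of the fixed amount and the turnover percentage, while SMEs and startups face the lower. A minimal sketch of that arithmetic (the figures mirror this lesson; actual fines are set case by case by regulators, and the function and tier names here are illustrative, not from the Act):

```python
# Illustrative sketch of the EU AI Act's tiered penalty caps.
# Figures are the maximums from this lesson; real fines are decided
# case by case by the enforcing authority.

def max_penalty(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine cap in EUR for a violation tier.

    Standard firms: the HIGHER of the fixed amount and the turnover percentage.
    SMEs/startups: the LOWER of the two.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),      # Article 5 violations
        "high_risk_noncompliance": (15_000_000, 0.03),  # high-risk requirements
        "incorrect_information": (7_500_000, 0.015),    # misleading authorities
    }
    fixed, pct = tiers[tier]
    pct_amount = pct * global_turnover_eur
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

# A firm with 1 billion EUR turnover committing a prohibited practice:
print(max_penalty("prohibited_practice", 1_000_000_000))  # 70000000.0 — 7% of 1B exceeds the 35M floor
```

Note how the SME flag flips `max` to `min`: for a large-turnover SME, the fixed amount becomes the binding cap rather than the percentage.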

Enforcement bodies:

- EU AI Office — Enforces GPAI model obligations at EU level

- National competent authorities — Enforce most provisions at member state level

- Market surveillance authorities — Monitor AI products on the market

- National data protection authorities — May enforce AI Act provisions related to fundamental rights

The EU AI Office

The AI Office is a new body within the European Commission responsible for:

- Enforcing rules on GPAI models (both standard and systemic risk)

- Coordinating with national authorities

- Developing guidance and codes of practice

- Monitoring AI market developments and emerging risks

- Managing the EU database of high-risk AI systems

- Supporting international cooperation on AI governance

The AI Office has the power to request information from GPAI model providers, conduct evaluations, and impose penalties for non-compliance.

Knowledge Check
A company deploys a high-risk AI system in the EU without completing the required conformity assessment. What is the maximum penalty?
Answer: 15 million EUR or 3% of global annual turnover, whichever is higher. The higher tier (35M EUR / 7%) is reserved for prohibited AI practices; the lower tier (7.5M EUR / 1.5%) applies to providing incorrect information to authorities.

EU AI Act — Key Numbers to Remember

For the exam, memorize these numbers:

- 4 risk tiers: Unacceptable, High, Limited, Minimal

- 8 Annex III categories: Biometrics, Infrastructure, Education, Employment, Essential services, Law enforcement, Migration, Justice

- 10^25 FLOPs: Training-compute threshold for the GPAI systemic-risk presumption

- 35M EUR / 7%: Maximum penalty for prohibited practices

- 15M EUR / 3%: Maximum penalty for high-risk AI non-compliance

- 7.5M EUR / 1.5%: Maximum penalty for incorrect information

- Article 5: Prohibited practices

- Articles 9–15: Core provider obligations for high-risk AI

- Article 26: Deployer obligations

- Article 50: Transparency obligations
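The 10^25 FLOPs figure is a bright-line presumption: a GPAI model trained with more compute than that threshold is presumed to pose systemic risk. A tiny sketch of the comparison, useful as an exam mnemonic (the function name is illustrative, not from the Act):

```python
# Illustrative check of the GPAI systemic-risk presumption:
# training compute above 10^25 FLOPs triggers the presumption.

SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model is presumed to pose systemic risk by compute alone."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3e25))  # True  — above the threshold
print(presumed_systemic_risk(8e24))  # False — below the threshold
```

Remember that the presumption is rebuttable and the Commission can also designate models as systemic-risk on other grounds; the compute threshold is just the automatic trigger.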

Final Check
Which body is primarily responsible for enforcing obligations on providers of general-purpose AI models under the EU AI Act?
The EU AI Office, established within the European Commission, is specifically responsible for enforcing GPAI model obligations. National authorities handle most other provisions, but GPAI enforcement is centralized at the EU level through the AI Office.
🎯
Day 14 Complete
"Transparency obligations apply to chatbots, deepfakes, and emotion recognition. Penalties reach 35M EUR or 7% of global turnover for prohibited practices. The EU AI Office enforces GPAI obligations at the EU level."
Next Lesson
NIST AI Risk Management Framework (AI RMF 1.0)