Day 13 of 30

The EU AI Act — Deployer, Importer, and Distributor Obligations

⏱ 18 min · 📊 Advanced · AIGP Certification Prep

Yesterday we covered provider obligations. Today we shift to the other roles in the AI value chain: deployers, importers, and distributors — plus the critical concept of when a deployer becomes a provider.

[Figure: the EU AI Act value chain — Provider, Importer, Distributor, and Deployer — with each role's obligations.]
Each role in the AI value chain carries specific obligations. A deployer who substantially modifies the AI system becomes a provider.

Deployer Obligations (Article 26)

Deployers of high-risk AI systems must:

Use the system properly — Follow the provider's instructions for use. Operate the system within its intended purpose.

Assign human oversight — Ensure individuals assigned to human oversight have the necessary competence, training, and authority.

Monitor performance — Monitor the AI system's operation based on the instructions for use. Report malfunctions and serious incidents to the provider or distributor.

Input data quality — To the extent they exercise control over input data, ensure it is relevant and sufficiently representative in view of the system's intended purpose.

Data protection impact assessment — Conduct a DPIA where required under GDPR Article 35.

Fundamental Rights Impact Assessment (FRIA) — Before first deploying a high-risk AI system, deployers that are public bodies (or private entities providing public services) must assess the system's impact on fundamental rights. Note that the FRIA sits in Article 27, alongside the general deployer duties of Article 26.

Information to affected individuals — Where a high-risk AI system makes or helps make decisions concerning natural persons, inform those persons that they are subject to the use of the system.

Knowledge Check
A company deploys a high-risk AI system from a third-party vendor but significantly modifies the model's decision boundaries and retrains it on new data. Under the EU AI Act, the company:
Answer: When a deployer makes a SUBSTANTIAL MODIFICATION to a high-risk AI system — such as retraining on new data or changing decision boundaries — the deployer becomes a provider under the EU AI Act and assumes all provider obligations, including risk management, documentation, and conformity assessment.

When a Deployer Becomes a Provider

This is a critical exam concept. A deployer becomes a provider when it does any of the following:

1. Substantially modifies the high-risk AI system (including retraining, fine-tuning with significant changes, modifying intended purpose)

2. Places the system on the market under its own name or trademark

3. Changes the intended purpose of the AI system from what the original provider specified

Why this matters: Organizations that customize off-the-shelf AI products may inadvertently assume provider obligations. Governance frameworks must include a process to assess whether modifications trigger provider status.
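That assessment process can be sketched as a simple governance check. The following is an illustrative sketch only, not legal advice; all names (`Modification`, `becomes_provider`) are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Modification:
    """Record of what a deployer changed in a third-party high-risk AI system.
    Field names are illustrative, not taken from the Act."""
    substantially_modified: bool     # e.g. retrained or significantly fine-tuned
    own_name_or_trademark: bool      # placed on the market under its own branding
    intended_purpose_changed: bool   # repurposed beyond the provider's spec

def becomes_provider(mod: Modification) -> bool:
    """Any single trigger is sufficient to assume provider obligations."""
    return (mod.substantially_modified
            or mod.own_name_or_trademark
            or mod.intended_purpose_changed)

# Retraining alone is enough to trigger provider status:
print(becomes_provider(Modification(True, False, False)))  # True
```

In practice, a check like this would sit inside a change-management workflow, forcing a documented assessment before any modified system goes live.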

Importer and Distributor Obligations

Importers (who place AI systems from providers established outside the EU on the EU market) must:

- Verify the provider has conducted conformity assessment

- Verify CE marking and technical documentation exist

- Ensure the provider can be contacted

- Not place a system on the market if they believe it doesn't comply

- Report non-compliance to the provider and relevant authorities

- Affix their name and contact details to the AI system, its packaging, or its accompanying documentation

Distributors (make AI systems available in the supply chain) must:

- Verify CE marking, declaration of conformity, and required documentation

- Not make a system available if they believe it doesn't comply

- Ensure storage and transport conditions don't jeopardize compliance

- Report non-compliance to the provider/importer and authorities
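The importer and distributor duties above amount to a pre-market verification checklist, which can be sketched as follows (the evidence keys are assumptions for illustration, not terms from the Act):

```python
# Evidence an importer or distributor should verify before placing a
# high-risk AI system on the EU market. Keys are illustrative.
REQUIRED_EVIDENCE = [
    "conformity_assessment_completed",
    "ce_marking_present",
    "technical_documentation_available",
    "provider_contact_established",
]

def may_place_on_market(evidence: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing): ok only if every required item is verified."""
    missing = [item for item in REQUIRED_EVIDENCE if not evidence.get(item)]
    return (not missing, missing)

# CE marking alone is not enough; three items remain unverified:
ok, gaps = may_place_on_market({"ce_marking_present": True})
print(ok, gaps)
```

Any unverified item should block the placing on the market and trigger a report to the provider and, where required, the authorities.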

General-Purpose AI (GPAI) Model Provisions

The EU AI Act includes specific obligations for GPAI models (foundation models like GPT-4, Claude, Gemini):

All GPAI model providers must:

- Prepare and maintain technical documentation

- Provide information and documentation to downstream providers integrating the model

- Establish a copyright compliance policy (including compliance with text and data mining opt-outs)

- Publish a sufficiently detailed summary of training data content

GPAI with systemic risk (additional obligations):

- Perform model evaluation including adversarial testing

- Assess and mitigate systemic risks

- Track and report serious incidents

- Ensure adequate cybersecurity protections

- Report energy consumption metrics

A GPAI model is presumed to have systemic risk if its cumulative training compute exceeds 10^25 floating-point operations (FLOPs), or if the European Commission designates it as such based on other criteria.
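As a rough illustration of the presumption, training compute is often estimated with the common ~6 × parameters × tokens rule of thumb — an approximation from the scaling-laws literature, not a method prescribed by the Act:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute presumption

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate: ~6 floating-point operations
    # per parameter per training token.
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimate_training_flops(70e9, 15e12)   # 6.3e24
print(presumed_systemic_risk(flops))           # False: below the 1e25 threshold
```

The estimate shows how close today's large models sit to the threshold: roughly doubling either the parameter count or the token count of this hypothetical model would cross it.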

Knowledge Check
A GPAI model provider is required to publish a summary of the content used for training the model. Which obligation category does this fall under?
Answer: Publishing a training data content summary is a standard obligation for ALL GPAI model providers, not just those with systemic risk. It's a transparency requirement, not a conformity assessment or deployer obligation.
Final Check
A European hospital deploys a high-risk AI diagnostic tool purchased from a US vendor through a German distributor. Who is responsible for conducting the fundamental rights impact assessment?
Answer: The FRIA is a deployer obligation under Article 27. The hospital, as the deployer, must assess the AI system's impact on fundamental rights before deployment. The provider has separate obligations (technical documentation, risk management), but the FRIA sits with the deployer.
🎯
Day 13 Complete
"Deployers must monitor performance, ensure human oversight, and conduct FRIAs. A deployer who substantially modifies a high-risk AI system BECOMES a provider. GPAI models face transparency obligations, with additional requirements for systemic risk models."
Next Lesson
The EU AI Act — Transparency, GPAI, and Enforcement