Yesterday we covered provider obligations. Today we shift to the other roles in the AI value chain: deployers, importers, and distributors — plus the critical concept of when a deployer becomes a provider.
Deployers of high-risk AI systems must:
Use the system properly — Follow the provider's instructions for use. Operate the system within its intended purpose.
Assign human oversight — Ensure individuals assigned to human oversight have the necessary competence, training, and authority.
Monitor performance — Monitor the AI system's operation based on the instructions for use. Report risks and serious incidents to the provider or distributor (serious incidents must also reach the relevant market surveillance authority).
Input data quality — To the extent the deployer exercises control over input data, ensure it is relevant and sufficiently representative for the system's intended purpose.
Data protection impact assessment — Conduct a DPIA where required under GDPR Article 35.
Fundamental Rights Impact Assessment (FRIA) — Before deploying a high-risk AI system, deployers that are public bodies (or private entities providing public services) must conduct an assessment of the system's impact on fundamental rights.
Information to affected individuals — Where a high-risk AI system makes or helps make decisions about natural persons, inform those individuals that they are subject to the use of the system.
This is a critical exam concept. Under Article 25 of the Act, a deployer becomes a provider when it:
1. Substantially modifies the high-risk AI system (e.g., retraining or fine-tuning that significantly changes how the system works)
2. Places the system on the market under its own name or trademark
3. Changes the intended purpose of the AI system from what the original provider specified, in a way that makes it a high-risk system
Why this matters: Organizations that customize off-the-shelf AI products may inadvertently assume provider obligations. Governance frameworks must include a process to assess whether modifications trigger provider status.
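To make that assessment process concrete, here is a minimal screening sketch a governance team could run whenever a third-party AI system is modified. It is illustrative only: the `ModificationRecord` fields and the `triggers_provider_status` helper are hypothetical names, and a non-empty result should mean "escalate to legal review", not a legal conclusion.

```python
from dataclasses import dataclass

@dataclass
class ModificationRecord:
    """Hypothetical record of a change made to a third-party AI system."""
    substantial_modification: bool   # e.g., retraining or significant fine-tuning
    rebranded_under_own_name: bool   # marketed under our own name or trademark?
    intended_purpose_changed: bool   # repurposed beyond the provider's stated use

def triggers_provider_status(mod: ModificationRecord) -> list[str]:
    """Return which of the three provider-status triggers this change appears to hit."""
    triggers = []
    if mod.substantial_modification:
        triggers.append("substantial modification of a high-risk system")
    if mod.rebranded_under_own_name:
        triggers.append("placed on the market under own name or trademark")
    if mod.intended_purpose_changed:
        triggers.append("change of intended purpose")
    return triggers

# Example: a deployer fine-tunes a vendor model and repurposes it.
record = ModificationRecord(
    substantial_modification=True,
    rebranded_under_own_name=False,
    intended_purpose_changed=True,
)
print(triggers_provider_status(record))  # two triggers -> escalate to legal review
```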
Importers (EU-based entities that bring AI systems from non-EU providers onto the EU market) must:
- Verify the provider has carried out the required conformity assessment procedure
- Verify CE marking and technical documentation exist
- Verify the provider has appointed an EU authorised representative, so the provider can be reached
- Not place a system on the market if they believe it doesn't comply
- Report non-compliance to the provider and relevant authorities
- Affix their name and contact information to the AI system or its packaging
Distributors (make AI systems available in the supply chain) must:
- Verify CE marking, declaration of conformity, and required documentation
- Not make a system available if they believe it doesn't comply
- Ensure storage and transport conditions don't jeopardize compliance
- Report non-compliance to the provider/importer and authorities
The EU AI Act also sets out specific obligations for providers of general-purpose AI (GPAI) models (foundation models such as GPT-4, Claude, and Gemini):
All GPAI model providers must:
- Prepare and maintain technical documentation
- Provide information and documentation to downstream providers integrating the model
- Establish a copyright compliance policy, including compliance with text-and-data-mining opt-outs under EU copyright law (see the opt-out check sketch after this list)
- Publish a sufficiently detailed summary of the content used to train the model
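The Act does not prescribe how to honor text-and-data-mining opt-outs; one machine-readable signal that crawl pipelines commonly check is robots.txt. A minimal sketch using Python's standard library, assuming a hypothetical crawler user agent `ExampleTrainingBot` (a real pipeline would also check other reservation mechanisms, such as page metadata or site terms):

```python
from urllib.robotparser import RobotFileParser

def may_crawl_for_training(page_url: str, robots_url: str,
                           user_agent: str = "ExampleTrainingBot") -> bool:
    """Check a site's robots.txt before fetching a page for training data."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt over the network
    return parser.can_fetch(user_agent, page_url)

# Usage (hypothetical site; requires network access):
# allowed = may_crawl_for_training("https://example.com/articles/1",
#                                  "https://example.com/robots.txt")
```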
GPAI with systemic risk (additional obligations):
- Perform model evaluation including adversarial testing
- Assess and mitigate systemic risks
- Track and report serious incidents
- Ensure adequate cybersecurity protections
- Document known or estimated energy consumption of the model
A GPAI model is presumed to have systemic risk if the cumulative compute used for its training exceeds 10^25 FLOPs, or if the European Commission designates it as such based on other criteria.
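For intuition about the 10^25 threshold: a common back-of-the-envelope estimate for dense transformer training compute is roughly 6 FLOPs per parameter per training token. The figures below are illustrative, not any real model's:

```python
def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

PRESUMPTION_THRESHOLD = 1e25  # FLOPs; training above this presumes systemic risk

# A 70B-parameter model trained on 15 trillion tokens:
small = estimated_training_flops(70e9, 15e12)    # 6.3e24 -> below the threshold
# A 500B-parameter model trained on 15 trillion tokens:
large = estimated_training_flops(500e9, 15e12)   # 4.5e25 -> presumed systemic risk
print(small > PRESUMPTION_THRESHOLD, large > PRESUMPTION_THRESHOLD)  # False True
```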