Most organizations don't build AI from scratch — they buy it, integrate it, or adapt open-source models. This creates vendor and supply chain risks that require AI-specific assessment beyond your standard vendor risk management program.
AI vendors introduce risks that traditional vendor assessments don't cover:
Model provenance — Where did the model come from? Who trained it? What data was used? Open-source models may have been trained on unknown data containing copyrighted, biased, or toxic content.
Data sourcing — How does the vendor obtain training data? Is it properly licensed? Were consent requirements met? Your organization inherits liability for data sourcing violations.
Security practices — How does the vendor secure models, training data, and inference infrastructure? What access controls exist? How are model updates validated?
Ethical practices — Has the vendor tested for bias? Do they have responsible AI commitments? What happens if bias is discovered post-deployment?
Transparency — Can the vendor explain how the model works? Can they provide model cards, data sheets, or audit reports? Lack of transparency is a risk multiplier — you can't assess what you can't see.
Before purchasing or integrating an AI product or service, conduct AI-specific due diligence:
Model documentation — Request model cards describing the model's intended use, training data, performance metrics, known limitations, and ethical considerations. For publicly hosted models, parts of this review can be automated (see the sketch after this list).
Security assessment — Evaluate the vendor's AI security posture: access controls, encryption, monitoring, incident response, and vulnerability management specific to AI infrastructure.
Compliance alignment — Verify the vendor's compliance with relevant regulations. If you're subject to the EU AI Act, your vendors must support your compliance obligations.
Data handling — Where is your data processed? Is it used for model improvement? Can you opt out? What happens to your data if the contract ends?
Audit rights — Can you audit the vendor's AI practices? Can you commission third-party audits? Audit rights are essential for regulated industries.
Incident notification — What are the vendor's obligations if a security incident, bias event, or model failure occurs? What's the notification timeline?
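For publicly hosted models, the documentation review above can be partly scripted. Below is a minimal sketch using the `huggingface_hub` library's model card loader; the required-field checklist and the keyword check for a limitations section are illustrative assumptions, not a standard.

```python
# A minimal sketch of automated model-card review using huggingface_hub.
# The required-field list and the "limitation" keyword check are
# illustrative assumptions, not a standard due-diligence checklist.
from huggingface_hub import ModelCard

REQUIRED_METADATA = ["license", "datasets"]  # illustrative minimum

def review_model_card(repo_id: str) -> list[str]:
    """Return due-diligence gaps found in a public model card."""
    card = ModelCard.load(repo_id)      # fetches the repo's README metadata
    metadata = card.data.to_dict()      # YAML front matter as a dict
    gaps = [field for field in REQUIRED_METADATA if not metadata.get(field)]
    if "limitation" not in card.text.lower():
        gaps.append("no documented limitations")
    return gaps

print(review_model_card("bert-base-uncased"))
```

A gap list like this feeds directly into due-diligence findings, and the absence of any model card at all should be treated as a finding in itself.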
Standard vendor contracts need AI-specific provisions:
Transparency obligations — Vendor must provide model documentation, performance metrics, and known limitations. Updates to model capabilities or limitations must be communicated proactively.
Audit rights — Right to audit the vendor's AI practices, including bias testing, security controls, and data handling. Include the right to commission independent third-party audits.
Incident notification — Vendor must notify you within a specified timeframe of any security incident, model failure, bias event, or data breach affecting your data or models.
Data ownership and usage — Clarify who owns input data, output data, and fine-tuned models. Restrict the vendor from using your data for model training without explicit consent.
Performance SLAs — AI-specific service levels: model accuracy, latency, availability, and fairness metrics. Define consequences for SLA violations (a machine-checkable sketch of such thresholds follows this list).
Exit provisions — Data portability, model portability (if applicable), transition support, and data deletion confirmation upon contract termination.
Regulatory compliance — Vendor must support your compliance obligations, including providing documentation for regulatory audits and cooperating with regulatory inquiries.
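To make SLA clauses testable rather than aspirational, it helps to express the contracted thresholds as a machine-checkable structure. A minimal sketch follows; the class name and every threshold value are hypothetical placeholders that the actual contract would define.

```python
# A minimal sketch of AI-specific SLA terms as a machine-checkable structure.
# All names and thresholds are hypothetical placeholders; the contract
# defines the real numbers and the consequences of breaching them.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIServiceLevels:
    min_accuracy: float        # e.g. top-1 accuracy on an agreed test set
    max_p99_latency_ms: float  # inference latency, 99th percentile
    min_availability: float    # monthly uptime fraction
    max_fairness_gap: float    # largest allowed metric gap between groups

vendor_sla = AIServiceLevels(
    min_accuracy=0.92,
    max_p99_latency_ms=250.0,
    min_availability=0.999,
    max_fairness_gap=0.05,
)
```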
AI supply chains introduce unique attack vectors:
Compromised models — Malicious actors can poison open-source models or inject backdoors that activate under specific conditions: a backdoored model can perform normally on standard benchmarks yet behave maliciously on targeted inputs.
Poisoned datasets — Training data from untrusted sources may contain deliberately introduced biases or backdoors. Data poisoning can be subtle and difficult to detect.
Compromised libraries — ML frameworks, data processing libraries, and model serving infrastructure are software supply chain targets. A compromised TensorFlow or PyTorch dependency affects every model built with it.
Model marketplace risks — Models downloaded from public repositories (Hugging Face, GitHub) may be modified versions of legitimate models with injected vulnerabilities.
Mitigation: Verify model provenance, validate training data sources, pin dependency versions, scan for known vulnerabilities, and test models against adversarial inputs before production deployment.
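A minimal sketch of the provenance-verification step, assuming the vendor publishes a SHA-256 digest for each model artifact through a trusted out-of-band channel (the file path and digest below are hypothetical):

```python
# A minimal sketch of verifying a downloaded model artifact against a
# vendor-published SHA-256 digest before loading it. The path and expected
# digest are hypothetical; in practice the digest must come from a trusted
# out-of-band channel, not the same location as the download itself.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"model artifact {path} failed provenance check: "
            f"expected {expected_sha256}, got {actual}"
        )

# verify_model(Path("models/classifier-v3.safetensors"), "ab12...")  # hypothetical
```

For dependency pinning, pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) extends the same idea to ML libraries: installation fails if a package's hash doesn't match the pinned value.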
Vendor risk doesn't end at contract signing. Ongoing monitoring includes:
Reassessment cadence — Annual comprehensive reassessment at minimum. High-risk AI vendors should be reassessed semi-annually or when significant changes occur.
Performance monitoring — Track AI-specific performance metrics: accuracy, fairness, latency, availability. Compare against contracted SLAs (see the automated check sketched after this list).
Change notifications — Monitor for vendor changes: model updates, data handling changes, security incidents, ownership changes, and shifts in financial stability.
Regulatory changes — Monitor for new regulations that affect your vendor relationship or the vendor's compliance status.
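Comparing observed metrics against contracted SLAs can be a simple automated check. A minimal standalone sketch, with hypothetical thresholds and observed values:

```python
# A minimal sketch of checking observed vendor metrics against contracted
# service levels. Threshold and observed values are hypothetical; real
# values come from the contract and your monitoring pipeline.
SLA = {"min_accuracy": 0.92, "max_p99_latency_ms": 250.0,
       "min_availability": 0.999, "max_fairness_gap": 0.05}

def sla_violations(observed: dict[str, float]) -> list[str]:
    checks = [
        ("accuracy", observed["accuracy"] >= SLA["min_accuracy"]),
        ("p99 latency", observed["p99_latency_ms"] <= SLA["max_p99_latency_ms"]),
        ("availability", observed["availability"] >= SLA["min_availability"]),
        ("fairness gap", observed["fairness_gap"] <= SLA["max_fairness_gap"]),
    ]
    return [name for name, ok in checks if not ok]

print(sla_violations({"accuracy": 0.90, "p99_latency_ms": 180.0,
                      "availability": 0.9995, "fairness_gap": 0.07}))
# -> ['accuracy', 'fairness gap']
```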
Metrics for vendor risk:
- Time since last vendor assessment
- Number of open vendor risk findings
- Vendor SLA compliance rate
- Vendor incident count and response time
- Vendor financial stability indicators
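A minimal sketch of computing two of these metrics from basic records; the record shapes, dates, and the implied 365-day reassessment window are illustrative assumptions:

```python
# A minimal sketch of two vendor-risk metrics from the list above. Dates
# and the 365-day reassessment window are illustrative assumptions.
from datetime import date

def days_since_assessment(last_assessed: date, today: date) -> int:
    return (today - last_assessed).days

def sla_compliance_rate(periods_met: int, periods_total: int) -> float:
    return periods_met / periods_total if periods_total else 1.0

today = date(2025, 6, 1)  # hypothetical reporting date
print(days_since_assessment(date(2024, 3, 15), today))  # 443 -> overdue if > 365
print(sla_compliance_rate(10, 12))                      # ~0.83
```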