Day 9 of 18

AI Vendor and Supply Chain Risk

⏱ 18 min 📊 Advanced ISACA AAISM Certification Prep

Most organizations don't build AI from scratch — they buy it, integrate it, or adapt open-source models. This creates vendor and supply chain risks that require AI-specific assessment beyond your standard vendor risk management program.

AI vendor risk categories

AI vendors introduce risks that traditional vendor assessments don't cover:

Model provenance — Where did the model come from? Who trained it? What data was used? Open-source models may have unknown training data containing copyrighted, biased, or toxic content.

Data sourcing — How does the vendor obtain training data? Is it properly licensed? Were consent requirements met? Your organization inherits liability for data sourcing violations.

Security practices — How does the vendor secure models, training data, and inference infrastructure? What access controls exist? How are model updates validated?

Ethical practices — Has the vendor tested for bias? Do they have responsible AI commitments? What happens if bias is discovered post-deployment?

Transparency — Can the vendor explain how the model works? Can they provide model cards, data sheets, or audit reports? Lack of transparency is a risk multiplier — you can't assess what you can't see.

[Checklist: AI vendor risk assessment covering eight essential areas, from model provenance to exit strategy]
Use this checklist for every AI vendor assessment. Reassess annually and on significant changes.

Due diligence before procurement

Before purchasing or integrating an AI product or service, conduct AI-specific due diligence:

Model documentation — Request model cards describing the model's intended use, training data, performance metrics, known limitations, and ethical considerations.

Security assessment — Evaluate the vendor's AI security posture: access controls, encryption, monitoring, incident response, and vulnerability management specific to AI infrastructure.

Compliance alignment — Verify the vendor's compliance with relevant regulations. If you're subject to the EU AI Act, your vendors must support your compliance obligations.

Data handling — Where is your data processed? Is it used for model improvement? Can you opt out? What happens to your data if the contract ends?

Audit rights — Can you audit the vendor's AI practices? Can you commission third-party audits? Audit rights are essential for regulated industries.

Incident notification — What are the vendor's obligations if a security incident, bias event, or model failure occurs? What's the notification timeline?
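Due diligence is easier to enforce when the checklist is tracked as data rather than in a spreadsheet. The sketch below is a minimal, hypothetical tracker (the item names mirror the six areas above; the vendor name and gating logic are illustrative assumptions, not AAISM-prescribed):

```python
from dataclasses import dataclass, field

# Hypothetical checklist items mirroring the six due-diligence areas above.
DILIGENCE_ITEMS = [
    "model_documentation",
    "security_assessment",
    "compliance_alignment",
    "data_handling",
    "audit_rights",
    "incident_notification",
]

@dataclass
class VendorDiligence:
    vendor: str
    # Maps each item to True (satisfied) or False (failed); absent = pending.
    results: dict = field(default_factory=dict)

    def record(self, item: str, passed: bool) -> None:
        if item not in DILIGENCE_ITEMS:
            raise ValueError(f"Unknown checklist item: {item}")
        self.results[item] = passed

    def outstanding(self) -> list:
        # Items not yet assessed at all.
        return [i for i in DILIGENCE_ITEMS if i not in self.results]

    def ready_for_procurement(self) -> bool:
        # Procurement proceeds only when every item is assessed AND passed.
        return all(self.results.get(i) is True for i in DILIGENCE_ITEMS)

d = VendorDiligence("ExampleAI")  # vendor name is illustrative
d.record("model_documentation", True)
print(d.outstanding())            # five areas still unassessed
print(d.ready_for_procurement())  # False until all six pass
```

The gating rule in `ready_for_procurement` encodes the point of this section: a single unassessed or failed area blocks the purchase decision rather than being waived informally.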

Knowledge Check
Your organization is evaluating an AI vendor for customer service automation. The vendor refuses to share model documentation or training data details, citing intellectual property protection. What is the MOST appropriate response?
**Balanced approach.** Complete transparency isn't always possible, but complete opacity isn't acceptable either. Negotiate for practical alternatives (model cards, audits, testing) and make a **risk-informed decision** about whether the remaining opacity is acceptable given the use case risk level.

AI-specific contract provisions

Standard vendor contracts need AI-specific provisions:

Transparency obligations — Vendor must provide model documentation, performance metrics, and known limitations. Updates to model capabilities or limitations must be communicated proactively.

Audit rights — Right to audit the vendor's AI practices, including bias testing, security controls, and data handling. Include the right to commission independent third-party audits.

Incident notification — Vendor must notify you within a specified timeframe of any security incident, model failure, bias event, or data breach affecting your data or models.

Data ownership and usage — Clarify who owns input data, output data, and fine-tuned models. Restrict the vendor from using your data for model training without explicit consent.

Performance SLAs — AI-specific service levels: model accuracy, latency, availability, and fairness metrics. Define consequences for SLA violations.

Exit provisions — Data portability, model portability (if applicable), transition support, and data deletion confirmation upon contract termination.

Regulatory compliance — Vendor must support your compliance obligations, including providing documentation for regulatory audits and cooperating with regulatory inquiries.

Supply chain attacks on AI

AI supply chains introduce unique attack vectors:

Compromised models — Malicious actors can poison open-source models or inject backdoors that activate under specific conditions. A backdoored model may perform normally on standard benchmarks while behaving maliciously on targeted inputs.

Poisoned datasets — Training data from untrusted sources may contain deliberately introduced biases or backdoors. Data poisoning can be subtle and difficult to detect.

Compromised libraries — ML frameworks, data processing libraries, and model serving infrastructure are software supply chain targets. A compromised TensorFlow or PyTorch dependency affects every model built with it.

Model marketplace risks — Models downloaded from public repositories (Hugging Face, GitHub) may be modified versions of legitimate models with injected vulnerabilities.

Mitigation: Verify model provenance, validate training data sources, pin dependency versions, scan for known vulnerabilities, and test models against adversarial inputs before production deployment.
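One concrete piece of the mitigation above — verifying provenance before deployment — is checking a downloaded model artifact against a pinned cryptographic digest. This is a minimal sketch using Python's standard library; the filename and digest source are hypothetical, and in practice the pinned digest must come from a trusted channel (a signed release, an internal registry), not the same repository the weights were downloaded from:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model weights need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> None:
    """Refuse to proceed if the artifact's digest doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(
            f"Model integrity check failed for {path.name}: "
            f"expected {expected_digest}, got {actual}"
        )

# Demo with a stand-in artifact (illustrative file and contents).
artifact = Path("model.bin")
artifact.write_bytes(b"example weights")
pinned = hashlib.sha256(b"example weights").hexdigest()
verify_model(artifact, pinned)  # passes silently when digests match
```

The same pattern applies to pinned library dependencies: lock files record exact versions and hashes so a compromised upstream release fails verification instead of silently entering your build.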

Knowledge Check
An engineering team wants to download and fine-tune an open-source model from a public repository for a customer-facing application. What is the PRIMARY supply chain risk?
The primary supply chain risk is **provenance and integrity.** You don't know who modified the model last, what data it was trained on, or whether it contains backdoors. License, performance, and dependency are valid concerns but secondary to the fundamental question of whether the model is trustworthy.

Ongoing vendor monitoring

Vendor risk doesn't end at contract signing. Ongoing monitoring includes:

Reassessment cadence — Annual comprehensive reassessment at minimum. High-risk AI vendors should be reassessed semi-annually or when significant changes occur.

Performance monitoring — Track AI-specific performance metrics: accuracy, fairness, latency, availability. Compare against SLAs.

Change notifications — Monitor for vendor changes: model updates, data handling changes, security incidents, ownership changes, and financial stability.

Regulatory changes — Monitor for new regulations that affect your vendor relationship or the vendor's compliance status.

Metrics for vendor risk:

- Time since last vendor assessment

- Number of open vendor risk findings

- Vendor SLA compliance rate

- Vendor incident count and response time

- Vendor financial stability indicators
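Two of the metrics above are simple to compute once vendor data is recorded. A minimal sketch (all dates and SLA records are hypothetical sample data):

```python
from datetime import date

# Hypothetical monthly SLA records for one vendor: each entry notes
# whether the vendor met its agreed accuracy/latency/availability SLAs.
sla_records = [
    {"month": "2024-01", "met": True},
    {"month": "2024-02", "met": True},
    {"month": "2024-03", "met": False},
    {"month": "2024-04", "met": True},
]

# Vendor SLA compliance rate: fraction of periods where all SLAs were met.
compliance_rate = sum(r["met"] for r in sla_records) / len(sla_records)

# Time since last vendor assessment, measured against a reporting date.
last_assessment = date(2024, 1, 15)
reporting_date = date(2024, 7, 15)
days_since_assessment = (reporting_date - last_assessment).days

print(f"SLA compliance rate: {compliance_rate:.0%}")           # 75%
print(f"Days since last assessment: {days_since_assessment}")  # 182
```

Tracked over time, these feed the reassessment cadence: a falling compliance rate or a stale assessment date is a trigger to move a vendor from annual to semi-annual review.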

Final Check
An AI vendor notifies you that they've updated their model to improve performance. The update also changes how the model handles certain edge cases. Your contract requires 30 days advance notice for model changes. What is the MOST appropriate response?
**Governance process applies.** Model updates are changes that require governance review. Test the changes, evaluate against your thresholds (especially the edge case behavior changes), and follow your approval process. Contract compliance is a separate issue that legal can address in parallel.
Day 9 Complete
"You can outsource AI capability but not AI accountability. Vendor risk assessment must cover provenance, transparency, and supply chain integrity."
Next Lesson
Leveraging AI as a Security Opportunity