Day 20 of 30

AI Risk Assessment Methodologies

⏱ 20 min · 📊 Advanced · AIGP Certification Prep

Risk assessment is the bridge between identifying what could go wrong and deciding what to do about it. Today you'll learn structured methodologies for assessing AI risks — a core AIGP exam topic.

AI Impact Assessments (AIAs)

An AI Impact Assessment is a structured process for evaluating the potential impacts of an AI system before deployment. Think of it as the AI equivalent of an environmental impact assessment.

When to conduct an AIA:

- Before deploying any AI system that affects individuals or groups

- When making significant changes to an existing AI system

- When repurposing an AI system for a new use case

- When regulatory requirements mandate it (EU AI Act, GDPR DPIA)

AIA components:

1. System description — Purpose, functionality, inputs/outputs, intended users

2. Stakeholder identification — Who is affected? Direct users, subjects, and third parties

3. Rights impact — Assessment of impact on fundamental rights (privacy, non-discrimination, due process)

4. Risk identification — What could go wrong? Consider all risk categories from Lesson 2

5. Risk evaluation — Likelihood × severity assessment for each identified risk

6. Mitigation measures — Controls and safeguards to reduce identified risks

7. Residual risk — Risk remaining after mitigation — is it acceptable?

8. Monitoring plan — How will ongoing risks be tracked?
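Steps 5 through 7 can be sketched as a tiny decision helper. The scores, tolerance threshold, and return strings below are illustrative assumptions, not values prescribed by any framework:

```python
# Component 7 in miniature: decide whether residual risk is acceptable.
# The tolerance value and scores are illustrative, not prescribed.

def residual_decision(residual_score: int, tolerance: int) -> str:
    """Compare post-mitigation (residual) risk to the organization's tolerance."""
    if residual_score <= tolerance:
        return "proceed with monitoring"     # document the decision and rationale
    return "further mitigation or no-go"

# Hiring-tool scenario: bias testing and human oversight lower the score.
print(residual_decision(residual_score=8, tolerance=9))    # within tolerance
print(residual_decision(residual_score=12, tolerance=9))   # exceeds tolerance
```

Either way, the decision and its rationale must be documented, which is the point of the knowledge check below.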

Knowledge Check
An AI Impact Assessment identifies a moderate risk of discriminatory outcomes in a hiring AI tool. After implementing bias testing and human oversight, some residual risk remains. What should the governance team do?
Residual risk must be evaluated against the organization's risk tolerance. If the residual risk is within tolerance, deployment may proceed with monitoring. If it exceeds tolerance, additional mitigation or a no-go decision is needed. The decision and rationale must be documented.

Fundamental Rights Impact Assessments (FRIAs)

The EU AI Act requires deployers of high-risk AI systems (particularly public bodies) to conduct a Fundamental Rights Impact Assessment before deployment.

FRIA focuses on:

- Impact on the right to non-discrimination — Could the AI system discriminate?

- Impact on the right to privacy — What personal data is processed and how?

- Impact on the right to an effective remedy — Can affected individuals challenge AI decisions?

- Impact on freedom of expression — Could the AI system chill speech or limit expression?

- Impact on the right to human dignity — Does the AI system treat people with respect?

The FRIA must be completed before first use of the high-risk AI system and must be sent to the relevant market surveillance authority.

Risk Scoring Frameworks

Two common approaches to risk scoring:

Qualitative risk assessment — Uses descriptive scales:

- Likelihood: Very Low / Low / Medium / High / Very High

- Severity: Negligible / Minor / Moderate / Significant / Critical

- Risk Level: Combination of likelihood and severity (risk matrix)

Quantitative risk assessment — Uses numerical values:

- Probability percentages for likelihood

- Financial or impact values for severity

- Calculated risk scores and expected loss values

Best practice: Use qualitative scoring for initial screening and prioritization, and quantitative scoring for high-risk AI systems where reliable data is available.
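The two approaches can be sketched side by side, assuming the five-point scales above; the matrix thresholds and euro figures are illustrative assumptions:

```python
# Qualitative vs. quantitative risk scoring; thresholds are illustrative.

LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "significant": 4, "critical": 5}

def qualitative_level(likelihood: str, severity: str) -> str:
    """Risk-matrix lookup: combine the two ordinal scales into a risk level."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

def expected_loss(probability: float, impact_eur: float) -> float:
    """Quantitative: expected loss = probability x financial impact."""
    return probability * impact_eur

print(qualitative_level("medium", "significant"))   # 3 * 4 = 12 -> "medium"
print(expected_loss(0.05, 200_000))                 # 10000.0
```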

Add a third dimension for AI: Reversibility — Can the harm be undone?

- A wrongful credit denial can be reversed

- An incorrect medical diagnosis may cause irreversible harm

- Reputational damage from a discriminatory AI system may be permanent
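One simple way to operationalize reversibility (the escalation rule here is an assumption for illustration, not a standard) is to bump irreversible harms up one risk level:

```python
# Escalate the governance response when harm cannot be undone.
# The one-step escalation rule is an illustrative assumption.

def adjusted_level(base_level: str, reversible: bool) -> str:
    """Bump an irreversible harm one level up: low -> medium -> high."""
    order = ["low", "medium", "high"]
    if reversible:
        return base_level
    i = order.index(base_level)
    return order[min(i + 1, len(order) - 1)]

print(adjusted_level("medium", reversible=True))    # wrongful credit denial: "medium"
print(adjusted_level("medium", reversible=False))   # irreversible diagnosis harm: "high"
```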

Knowledge Check
When scoring AI risks, why is "reversibility" an important additional dimension beyond likelihood and severity?
Reversibility determines the appropriate governance response. Irreversible harms (medical injury, loss of life, permanent reputation damage) require stronger preventive controls because there's no opportunity for post-hoc remediation. Reversible harms (incorrect billing, denied access) may accept higher risk levels because they can be corrected.

Risk Appetite in AI Deployment Decisions

Connecting risk assessment to deployment decisions:

Risk avoidance — Don't deploy the AI system. Appropriate when risks are unacceptable or the use case is inappropriate.

Risk mitigation — Implement controls to reduce risk to an acceptable level. Most common response for AI governance.

Risk transfer — Shift risk to another party (insurance, contractual allocation to vendors). Limited applicability for AI — you can transfer financial risk but not reputational or ethical risk.

Risk acceptance — Accept the residual risk after mitigation. Must be a documented, deliberate decision by authorized personnel, not a default.
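The four treatment options can be sketched as a selection function; the decision rules below are illustrative, not a mandated procedure:

```python
# Map an assessed risk to one of the four treatments; rules are illustrative.

def treatment(risk_level: str, can_mitigate: bool, within_tolerance: bool) -> str:
    if risk_level == "high" and not can_mitigate:
        return "avoid"       # don't deploy the AI system
    if not within_tolerance and can_mitigate:
        return "mitigate"    # controls to bring risk within tolerance
    if within_tolerance:
        return "accept"      # documented, deliberate decision by authorized personnel
    return "transfer"        # e.g. insurance; shifts financial risk only

# Final-check chatbot scenario: potential physical harm, mitigation available.
print(treatment("high", can_mitigate=True, within_tolerance=False))   # "mitigate"
```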

Final Check
An AI risk assessment reveals that a customer service chatbot occasionally provides incorrect product safety information. The incorrect information could lead to physical harm. What risk treatment is MOST appropriate?
When AI outputs could cause physical harm, active mitigation is required. Risk acceptance is inappropriate given the potential for physical injury. Risk transfer (insurance) doesn't prevent the harm. Risk avoidance may be disproportionate if mitigation can effectively reduce the risk. Content guardrails, human escalation, and monitoring directly address the identified risk.
🎯
Day 20 Complete
"AI Impact Assessments evaluate risks before deployment. Risk scoring should include reversibility alongside likelihood and severity. Risk acceptance must be a documented, deliberate decision — never a default."
Next Lesson
Model Evaluation, Testing, and Validation