Day 24 of 30

Domain III Capstone — Mock Development Governance Review

⏱ 20 min 📊 Advanced AIGP Certification Prep

Today's capstone applies everything from Domain III to a comprehensive case study. You'll work through governing an AI lending model from design to deployment — and answer 10 scenario-based questions like those on the real exam.

Case Study — QuickLend AI

Background: QuickLend Financial Services is developing an AI system to automate consumer loan decisions for applications under $50,000. The system will use applicant data (income, employment history, credit score, education, zip code) to generate a risk score and an approve/deny recommendation.

Deployment context: The system will be deployed in the United States and the European Union. It will process approximately 10,000 applications per month. For applications under $10,000, the AI's recommendation is automatically actioned (no human review). For applications $10,000–$50,000, a human loan officer reviews the AI's recommendation before making the final decision.

Development team: 4 data scientists, 1 ML engineer. No dedicated governance or legal personnel on the team.

Scenario Question 1
Before development begins, what is the FIRST governance action QuickLend should take?
The first step in the AI development lifecycle is problem formulation and use case assessment. Before collecting data or building models, QuickLend must classify the risk level (consumer credit scoring is high-risk under the EU AI Act and draws fair-lending scrutiny under US law) and identify all applicable regulatory requirements (ECOA, FCRA, GDPR, EU AI Act).
Scenario Question 2
QuickLend plans to train its model on 5 years of historical loan decisions. What is the PRIMARY data governance concern?
Historical lending data is likely to reflect past discrimination — documented bias in lending against minorities, women, and other protected groups. Training on this data risks embedding and perpetuating that discrimination. This is the most critical data governance concern, requiring bias assessment and mitigation before training begins.
Scenario Question 3
The model uses "zip code" as a predictive feature. A governance review flags this. Why?
Zip codes correlate with race due to historical residential segregation patterns. Using zip code as a feature can produce discriminatory outcomes even though race isn't explicitly included — this is the proxy discrimination problem central to fair lending compliance.
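A quick way to see the proxy problem is to test how well the suspect feature alone predicts the protected attribute. The sketch below uses entirely synthetic `(zip_code, race)` records (the values are illustrative assumptions, not real data): if a trivial "majority class per zip" rule beats the overall base rate by a wide margin, the feature is carrying protected-attribute information even though race is never an input.

```python
from collections import Counter, defaultdict

# Hypothetical applicant records: (zip_code, race_group) pairs.
# All values are synthetic and for illustration only.
records = [
    ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("20002", "B"), ("20002", "B"), ("20002", "B"),
    ("30003", "A"), ("30003", "A"), ("30003", "A"),
]

# Group records by zip code.
by_zip = defaultdict(list)
for zip_code, group in records:
    by_zip[zip_code].append(group)

# Proxy check: predict each record's group as the majority group
# in its zip code, and count how often that guess is right.
correct = sum(Counter(groups).most_common(1)[0][1] for groups in by_zip.values())
proxy_accuracy = correct / len(records)

# Baseline: always predicting the overall majority group.
base_rate = Counter(g for _, g in records).most_common(1)[0][1] / len(records)

print(f"proxy accuracy {proxy_accuracy:.2f} vs base rate {base_rate:.2f}")
```

When proxy accuracy sits well above the base rate, as in this toy data, dropping the explicit protected attribute does not remove the signal; the model can still learn race-correlated patterns through the proxy.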
Scenario Question 4
For loans under $10,000 where the AI's decision is automatic, which governance concern is MOST acute?
Decisions based solely on automated processing that produce legal or similarly significant effects (loan approval/denial) trigger GDPR Article 22. QuickLend must implement safeguards: the right to human intervention, the right to express a point of view, and the right to contest the decision.
Scenario Question 5
The fairness audit reveals that the model approves 52% of applications from majority-white zip codes but only 38% of applications from majority-minority zip codes. Under ECOA, QuickLend should:
A 14-percentage-point disparity correlated with race creates significant disparate impact risk under ECOA. QuickLend must investigate, assess whether the use of zip code is necessary and justified, and implement mitigation. Simply removing zip code may not fix proxy effects from other correlated features.
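The audit figures can also be framed as an adverse-impact ratio. The four-fifths rule below is an EEOC guideline from employment law that is often borrowed as a screening heuristic in fair-lending analysis, not a statutory ECOA threshold; the sketch just works through the arithmetic from the scenario.

```python
# Approval rates from the audit in the scenario.
majority_rate = 0.52   # majority-white zip codes
minority_rate = 0.38   # majority-minority zip codes

# Adverse-impact ratio: disadvantaged group's rate over the
# advantaged group's rate.
impact_ratio = minority_rate / majority_rate

# Four-fifths rule: a ratio below 0.8 is a common screening flag
# for potential disparate impact (a heuristic, not a legal test).
FOUR_FIFTHS = 0.8
flagged = impact_ratio < FOUR_FIFTHS

print(f"adverse-impact ratio: {impact_ratio:.2f} (flagged: {flagged})")
```

Here the ratio is roughly 0.73, below the 0.8 heuristic, which supports treating the disparity as a finding that requires investigation rather than a tolerable gap.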
Scenario Question 6
Since the system will be deployed in the EU for loan decisions, it is classified as high-risk under the EU AI Act. Which conformity assessment procedure applies?
Credit scoring AI is classified as high-risk under Annex III (essential services) but uses the internal conformity assessment procedure. Third-party assessment by a notified body is reserved for specific categories like biometric identification.
Scenario Question 7
The development team has no dedicated governance or legal personnel. What is the MOST critical organizational risk?
Without dedicated governance or legal expertise, the team risks missing regulatory requirements (ECOA, FCRA, GDPR, EU AI Act) and implementing inadequate controls. The most critical risk is governance gaps, not timeline or accuracy concerns.
Scenario Question 8
The model documentation was written by the data science team after deployment. An audit finds several development decisions are not documented. What is the governance assessment?
Post-hoc documentation is unreliable because it cannot accurately reconstruct development-phase decisions, data choices, and rationale. This is a governance process failure that should be corrected for future projects. The current documentation should be flagged as incomplete.
Scenario Question 9
During the go/no-go review, technical tests pass but the fundamental rights impact assessment (FRIA) has not been completed for the EU deployment. The product team argues the FRIA can be done post-launch. What should the governance committee decide?
Under Article 27 of the EU AI Act, deployers of high-risk credit-scoring systems must complete a FRIA BEFORE first use. The governance committee cannot approve EU deployment without it. US deployment may proceed if US requirements are met, but that should be a separate decision.
Scenario Question 10
QuickLend decides on a conditional deployment: shadow mode for 30 days, followed by limited rollout in one market, then full deployment. What monitoring metrics should be defined BEFORE shadow mode begins?
Monitoring metrics must be defined BEFORE deployment (even shadow mode) so that results can be evaluated against predetermined thresholds. Waiting to define metrics after deployment means there are no baselines for comparison. The metrics must cover performance, fairness, drift, and escalation — not just accuracy.
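In practice, "define metrics before shadow mode" means fixing named metrics and thresholds up front, then evaluating each monitoring snapshot against them. The sketch below is a minimal illustration; the metric names and threshold values are assumptions for this example, not QuickLend's actual plan.

```python
# Thresholds fixed BEFORE shadow mode begins (illustrative values).
THRESHOLDS = {
    "accuracy_min": 0.90,        # performance floor
    "impact_ratio_min": 0.80,    # fairness: adverse-impact ratio floor
    "psi_max": 0.10,             # drift: population stability index cap
    "override_rate_max": 0.15,   # escalation: human overrides of AI recs
}

def evaluate(snapshot: dict) -> list[str]:
    """Return the list of threshold breaches for one monitoring period."""
    breaches = []
    if snapshot["accuracy"] < THRESHOLDS["accuracy_min"]:
        breaches.append("accuracy")
    if snapshot["impact_ratio"] < THRESHOLDS["impact_ratio_min"]:
        breaches.append("fairness")
    if snapshot["psi"] > THRESHOLDS["psi_max"]:
        breaches.append("drift")
    if snapshot["override_rate"] > THRESHOLDS["override_rate_max"]:
        breaches.append("escalation")
    return breaches

# One synthetic shadow-mode snapshot: performance and drift are fine,
# but the fairness floor is breached.
week1 = {"accuracy": 0.93, "impact_ratio": 0.76, "psi": 0.04, "override_rate": 0.09}
print(evaluate(week1))
```

Because the thresholds predate the data, a breach like the fairness one above is an objective trigger for the predefined response, rather than a number to be rationalized after the fact.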
🎯 Day 24 Complete
"Governing AI development requires governance at every stage: use case assessment before building, data bias checks before training, fairness audits before testing, and comprehensive readiness review before deployment. Documentation must be contemporaneous."
Next Lesson
Continuous Monitoring of Deployed AI Systems