Day 17 of 18

Domain 3 Capstone: Controls Assessment

⏱ 20 min · 📊 Advanced · ISACA AAISM Certification Prep

Today we apply the material from Days 12-16 to a comprehensive controls assessment scenario. The key pattern for Domain 3 questions: choose the root cause control, not the symptom control.

Scenario: AI fraud detection at a bank

Background: GlobalBank has deployed FraudShield, an AI-based fraud detection system that processes all card transactions in real time. The system:

- Processes 2 million transactions daily

- Uses a gradient-boosted ensemble model trained on 3 years of transaction data

- Is the sole fraud detection mechanism (replaced the rule-based system 6 months ago)

- Runs on cloud infrastructure (AWS SageMaker)

- Was developed by the internal data science team using open-source libraries

- Has reduced fraud losses by 40% since deployment

The CISO has asked you to conduct a comprehensive controls assessment. Identify gaps and recommend improvements.

Scenario Question 1
The assessment reveals that FraudShield has no documented architecture review. The model was deployed directly from the data science team's development environment to production. What is the PRIMARY control gap?
**Governance gap.** The primary issue isn't a specific technical control — it's the absence of the governance process that would identify and require technical controls. Without an architecture review and deployment approval gate, any number of technical controls could be missing.
Scenario Question 2
FraudShield replaced the rule-based system entirely. There is no fallback mechanism if FraudShield fails. Transaction processing would continue without fraud detection. What type of control is needed?
**Business continuity control.** The risk isn't just system failure — it's processing transactions without any fraud detection. A fallback procedure (whether rule-based or manual) ensures continued fraud detection during AI system outages. High availability (load balancing) helps but doesn't address total system failure.
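The fallback pattern can be sketched in a few lines. This is a minimal illustration, not FraudShield's actual design: all names (`score_transaction`, `rule_based_score`) and thresholds are invented for the example.

```python
HIGH_AMOUNT = 10_000  # illustrative rule threshold, not a real bank policy

def rule_based_score(txn: dict) -> float:
    """Minimal rule-based fallback: flag large or cross-border transactions."""
    score = 0.0
    if txn.get("amount", 0) > HIGH_AMOUNT:
        score += 0.6
    if txn.get("country") != txn.get("card_country"):
        score += 0.4
    return min(score, 1.0)

def score_transaction(txn: dict, ai_scorer) -> tuple[float, str]:
    """Try the AI model first; fall back to rules so screening never stops."""
    try:
        return ai_scorer(txn), "ai"
    except Exception:
        return rule_based_score(txn), "rule_fallback"

def broken_scorer(txn):
    """Simulates a total AI outage (e.g., model endpoint unavailable)."""
    raise RuntimeError("model endpoint unavailable")

score, source = score_transaction(
    {"amount": 15_000, "country": "DE", "card_country": "US"}, broken_scorer
)
```

The point of the control is in the `except` branch: a degraded-but-present fraud check beats no check at all during an outage.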
Scenario Question 3
The training data includes 3 years of transaction data. No data quality checks were performed before training, and the data has never been assessed for demographic bias. Which control should be implemented FIRST?
**Assess before acting.** You don't know the extent of the problem yet. A data quality and bias assessment tells you what you're dealing with. It may find minimal bias (no action needed) or significant issues (remediation required). Acting without assessment risks both under- and over-response.
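One concrete way to start that assessment is comparing the model's flag rate across demographic groups. A rough sketch, assuming grouped (group, flagged) records are available; the 0.8 ratio is a common heuristic, not an AAISM-mandated threshold:

```python
from collections import defaultdict

def flag_rate_by_group(records):
    """Share of transactions flagged as fraud, per demographic group."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, flagged in records:
        totals[group] += 1
        flags[group] += int(flagged)
    return {g: flags[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Min/max flag-rate ratio; values well below ~0.8 warrant investigation."""
    return min(rates.values()) / max(rates.values())

# Toy data: group B is flagged at twice the rate of group A.
records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rate_by_group(records)
ratio = disparity_ratio(rates)
```

A result like this doesn't prove bias (base rates may differ), but it tells you whether deeper analysis is needed before you commit to remediation.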
Scenario Question 4
The model runs on AWS SageMaker. The assessment reveals that the data science team has full administrative access to the production SageMaker environment, the same environment where they develop and test models. What is the root cause control gap?
**Root cause control.** Access restrictions, approval workflows, and logging are all valid controls, but they address symptoms. The root cause is that development and production share the same environment. Separating environments makes the other controls effective and prevents accidental or unauthorized changes to production.
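Once environments are separated, a deployment gate enforces the boundary. A simplified sketch of the idea; the role names are hypothetical and a real implementation would live in IAM policy, not application code:

```python
APPROVED_PROD_ROLES = {"release-manager", "ci-deployer"}  # illustrative roles

def can_deploy(target_env: str, role: str) -> bool:
    """Deployment gate: developers keep full access in dev,
    but only approved roles may change production."""
    if target_env != "prod":
        return True
    return role in APPROVED_PROD_ROLES
```

Note the sequencing: without separate environments, a gate like this has nothing to attach to, which is why environment separation is the root cause control.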
Scenario Question 5
The model uses open-source libraries (scikit-learn, XGBoost, pandas) that were installed 6 months ago and haven't been updated since. No vulnerability scanning has been performed. What is the MOST appropriate control?
**Systematic control, not one-time fix.** Updating now is necessary but insufficient — the same gap will recur. Dependency management includes scanning (know what's vulnerable), version pinning (control what's installed), and a patching process (update safely). This is supply chain security for ML infrastructure.
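Version pinning is the part of this that is easy to check mechanically. A sketch that flags unpinned requirements lines (in practice you would pair this with a scanner such as `pip-audit` run against the pinned file):

```python
def unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines not pinned to an exact version.
    Pinned deps ('pkg==1.2.3') are reproducible and auditable;
    ranges or bare names let versions drift silently."""
    return [line for line in requirements
            if line and not line.startswith("#") and "==" not in line]

reqs = ["scikit-learn==1.3.2", "xgboost>=1.7", "pandas", "# dev tools below"]
loose = unpinned(reqs)
```

A check like this belongs in CI, which is what turns the one-time fix into a systematic control.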
Scenario Question 6
The model processes 2 million transactions daily. Monitoring consists of a daily accuracy report sent to the data science team via email. No real-time monitoring exists. What should the monitoring improvement prioritize?
**Comprehensive monitoring.** A fraud detection system handling 2 million daily transactions needs monitoring across all dimensions — not just accuracy. Security monitoring catches adversarial attacks. Drift monitoring catches data changes. Fairness monitoring catches disparate impact. All are critical for a financial services AI system.
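Drift monitoring, one of those dimensions, can be illustrated with the Population Stability Index over a binned input feature. The thresholds quoted are a widely used rule of thumb, not an ISACA requirement:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions
    (each a list of bin proportions summing to 1).
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant drift."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time distribution
today    = [0.10, 0.20, 0.30, 0.40]  # live traffic, shifted toward high bins
drift = psi(baseline, today)
```

A daily accuracy email can't surface this; fraud labels arrive weeks late, while input drift is visible the same day.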
Scenario Question 7
During the assessment, you discover that model outputs (fraud/not-fraud decisions) are logged, but model inputs (transaction details) are not logged. The data science team says input logging would create a copy of sensitive financial data. How should this be addressed?
**Balanced control.** For a financial services fraud detection system, audit trails are a regulatory requirement. The data protection concern is valid but manageable through appropriate controls. Don't choose between audit completeness and data protection — implement both.
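One way to implement both is to log inputs with sensitive fields tokenized. A sketch using a keyed hash; the field names and key handling are illustrative (a real key belongs in a secrets manager, with rotation):

```python
import hashlib
import hmac
import json

LOG_KEY = b"rotate-me"  # illustrative only; never hard-code a real key

def pseudonymize(value: str) -> str:
    """Keyed hash: the same card always maps to the same token, so audit
    joins still work, but the raw number never reaches the log."""
    return hmac.new(LOG_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def audit_record(txn: dict) -> str:
    """Log model inputs with sensitive fields tokenized and amounts coarsened."""
    return json.dumps({
        "card": pseudonymize(txn["card_number"]),
        "amount_band": "high" if txn["amount"] > 1_000 else "low",
        "merchant": txn["merchant"],
    }, sort_keys=True)

record = json.loads(audit_record(
    {"card_number": "4111111111111111", "amount": 5_000, "merchant": "ACME"}
))
```

The audit trail stays complete enough to reconstruct decisions, while the log itself is no longer a second copy of cardholder data.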
Scenario Question 8
The assessment identifies that no adversarial testing has ever been performed on FraudShield. The data science team says the model's complexity makes it resistant to evasion. What is the BEST response?
**Controls must be verified, not assumed.** Model complexity doesn't guarantee adversarial robustness. Adversarial testing should be both automated (regular, reproducible) and manual (red team exercises that simulate realistic attack scenarios). The data science team's opinion is not a substitute for testing.
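Even a naive automated probe beats an untested assumption. A toy evasion test against a stand-in model (the real check would perturb many features against the production scorer, not just the amount):

```python
def evasion_test(score_fn, txn: dict, threshold: float = 0.5,
                 steps: int = 20, max_reduction: float = 0.2):
    """Naive evasion probe: can small amount reductions flip a fraud flag?
    Returns the first perturbed amount that evades, or None."""
    if score_fn(txn) < threshold:
        return None  # not flagged to begin with
    for i in range(1, steps + 1):
        candidate = dict(txn, amount=txn["amount"] * (1 - max_reduction * i / steps))
        if score_fn(candidate) < threshold:
            return candidate["amount"]
    return None

def toy_model(txn):
    """Stand-in scorer with a hard cutoff at 10,000 -- trivially evadable."""
    return 0.9 if txn["amount"] >= 10_000 else 0.1

evaded_at = evasion_test(toy_model, {"amount": 11_000})
```

A finding like "an 11,000 transaction evades the flag at roughly 9,900" is exactly the kind of evidence that replaces the team's assurance with a tested property.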
Scenario Question 9
You present your assessment findings to the CISO: 12 control gaps identified across governance, architecture, data, monitoring, and security. Budget allows addressing 8 this fiscal year. How do you prioritize?
**Risk-based prioritization.** When you can't do everything, do what reduces the most risk. Risk rating (likelihood × impact × criticality) provides an objective basis for prioritization. Some governance gaps may rank highest, but the prioritization should be based on risk, not on category.
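The arithmetic behind that prioritization is simple enough to sketch. The gap names and 1-5 scores below are invented for illustration; the mechanism (rank by likelihood × impact × criticality, take what capacity allows) is the point:

```python
def prioritize(gaps, capacity=8):
    """Rank control gaps by risk score (likelihood x impact x criticality,
    each scored 1-5 here) and return what fits this year's capacity."""
    ranked = sorted(
        gaps,
        key=lambda g: g["likelihood"] * g["impact"] * g["criticality"],
        reverse=True,
    )
    return [g["name"] for g in ranked[:capacity]]

gaps = [
    {"name": "no architecture review", "likelihood": 4, "impact": 5, "criticality": 5},
    {"name": "no fallback",            "likelihood": 2, "impact": 5, "criticality": 4},
    {"name": "unpinned dependencies",  "likelihood": 4, "impact": 3, "criticality": 3},
]
plan = prioritize(gaps, capacity=2)
```

Note that the governance gap wins here because of its score, not its category, which is the exam-relevant distinction.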
Scenario Question 10
After remediation, the CISO asks: "Are we secure now?" What is the MOST accurate and appropriate response?
**Honest, specific, and professional.** Acknowledge the improvement (credibility), note that residual risk is managed (accountability), and highlight continuous monitoring (maturity). Avoid both false assurance and vague platitudes. The CISO needs actionable information, not philosophy.

Key Domain 3 patterns

1. Root cause over symptom. When multiple controls could address an issue, choose the one that addresses the underlying cause, not just the visible symptom.

2. Preventive over detective. When both options exist, prefer controls that prevent the issue. Detective controls are the backup when prevention isn't possible.

3. Comprehensive over point solutions. A monitoring program beats a monitoring tool. A governance process beats a one-time assessment.

4. Proportionate to risk. High-risk AI systems get comprehensive controls. Low-risk systems get proportionate controls.

5. Verify, don't assume. "The model is complex" isn't a control. "Adversarial testing shows the model resists known evasion techniques" is a control.

Day 17 Complete — Domain 3 Done
"Choose root cause controls over symptom controls. Verify security properties through testing — don't assume them based on model complexity or team assurance."
Next Lesson
Exam Strategy, ISACA Mindset, and Practice Assessment