Day 7 of 18

AI Risk Identification and Assessment

⏱ 20 min 📊 Advanced ISACA AAISM Certification Prep

Welcome to Domain 2. You already understand risk management from CISM/CISSP. This domain focuses on what's different about AI risk: new risk categories, new assessment methods, and the challenge of managing risk in systems that learn and change over time.

AI risk categories

AI introduces risk categories that don't exist in traditional IT risk frameworks:

Adversarial risks — Intentional attacks on AI systems: data poisoning, model evasion, prompt injection, model extraction, and membership inference attacks. These are the AI equivalent of traditional cyber attacks.

Operational risks — Unintentional failures: model drift, data quality degradation, infrastructure failures, and integration errors. These are often more likely than adversarial attacks and can cause equally severe damage.

Ethical risks — Bias, fairness violations, lack of transparency, and privacy violations. These may not be "security" risks in the traditional sense, but they carry significant regulatory, legal, and reputational consequences.

Regulatory risks — Non-compliance with AI-specific regulations (EU AI Act, sector-specific rules). The regulatory landscape is evolving rapidly, creating compliance uncertainty.

Reputational risks — Public perception of AI failures. A biased hiring AI or a chatbot producing harmful content can damage brand reputation far beyond the direct impact.

Systemic risks — Risks that emerge from the interaction of multiple AI systems or widespread AI adoption. Correlated failures across models using similar training data or architectures.

A comprehensive AI risk assessment must cover all six categories, not just adversarial risks.

[Figure: AI risk assessment matrix showing likelihood versus impact, with AI-specific risk categories mapped to each cell]
A classic risk matrix adapted for AI. Add the reversibility dimension for more accurate AI risk scoring.

NIST AI RMF Map function

The NIST AI RMF Map function provides a structured approach to AI risk identification:

Map 1: Context — Establish the context for risk identification. What is the AI system's intended purpose, deployment environment, and affected stakeholders?

Map 2: Categorize — Classify the AI system's risk level based on its function, data, autonomy level, and impact potential. This maps to the EU AI Act risk classification (unacceptable, high, limited, minimal).

Map 3: Identify risks — Systematically identify risks across all categories. Use threat modeling adapted for AI: what can go wrong with the model, the data, the infrastructure, and the human interactions?

Map 4: Assess — Evaluate identified risks by likelihood and impact severity, plus a dimension unique to AI: reversibility. Can you undo the damage? A wrong recommendation is reversible. A biased hiring decision that wasn't caught for months is much harder to reverse.

The Map function feeds into the Measure and Manage functions (covered in subsequent lessons) to create the complete risk management lifecycle.
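The reversibility dimension can be folded into a conventional likelihood × impact score. The sketch below is one illustrative way to do this; the 1-5 scales and the multiplier weighting are assumptions for demonstration, not prescribed by the NIST AI RMF.

```python
# Sketch: scoring an AI risk on likelihood, impact, and reversibility.
# The 1-5 scales and multiplier weights are illustrative assumptions.

def ai_risk_score(likelihood: int, impact: int, reversibility: int) -> float:
    """Each input is rated 1 (low) to 5 (high).

    `reversibility` is rated 1 (easily undone) to 5 (effectively
    irreversible) and scales the classic likelihood x impact score.
    """
    for value in (likelihood, impact, reversibility):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    base = likelihood * impact                   # classic matrix score, 1-25
    multiplier = 1 + (reversibility - 1) * 0.25  # 1.0 (reversible) to 2.0
    return base * multiplier

# A wrong product recommendation: likely, low impact, trivially reversible.
print(ai_risk_score(4, 2, 1))   # 8.0
# A biased hiring decision undetected for months: less likely, but high
# impact and very hard to reverse.
print(ai_risk_score(2, 5, 5))   # 20.0
```

Note how the irreversible risk outscores the reversible one despite a lower likelihood — exactly the effect the reversibility dimension is meant to capture.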

Knowledge Check
An AI system is used to prioritize customer service tickets. It processes no personal data and has no regulatory requirements. A risk assessment identifies that the model occasionally deprioritizes urgent tickets. How should this risk be classified?
Risk classification should be **proportionate to actual impact.** The absence of personal data and regulatory requirements reduces some risk categories, but deprioritizing urgent tickets has real operational and customer impact. Moderate operational risk with proportionate controls is the appropriate classification.

EU AI Act risk classification

The EU AI Act provides a practical framework for internal AI risk classification, even outside the EU:

Unacceptable risk — Prohibited AI practices. Social scoring, real-time biometric identification in public spaces (with exceptions), manipulation of vulnerable groups, and emotion recognition in workplace/education settings.

High risk — AI systems in critical areas: employment/hiring, credit scoring, education, healthcare, law enforcement, migration, and critical infrastructure. These require conformity assessments, human oversight, and documentation.

Limited risk — AI systems that interact with people (chatbots), generate content (deepfakes), or perform emotion recognition. These require transparency — users must know they're interacting with AI.

Minimal risk — Everything else. Spam filters, recommendation engines, AI-powered search. No specific requirements beyond general product safety.

For the exam, understand why a system falls into each category and what obligations each category triggers.
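The four-tier logic above can be expressed as a simple decision cascade for internal screening. The sketch below follows the lesson's categories; the keyword sets are illustrative assumptions, not the Act's legal definitions, and a real classification requires legal review.

```python
# Sketch: an internal EU AI Act-style risk tier screen.
# The keyword sets are illustrative assumptions, not legal definitions.

PROHIBITED_PRACTICES = {"social scoring", "public biometric id",
                        "workplace emotion recognition"}
HIGH_RISK_DOMAINS = {"hiring", "credit scoring", "education", "healthcare",
                     "law enforcement", "migration",
                     "critical infrastructure"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake", "emotion recognition"}

def classify_tier(practice: str, domain: str, interaction: str) -> str:
    """Evaluate tiers in order of severity; first match wins."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if interaction in TRANSPARENCY_TRIGGERS:
        return "limited"
    return "minimal"

print(classify_tier("none", "hiring", "none"))     # high
print(classify_tier("none", "retail", "chatbot"))  # limited
print(classify_tier("none", "retail", "none"))     # minimal
```

The ordering matters: a hiring chatbot is high risk, not limited risk, because the more severe tier is checked first.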

Building an AI risk register

Your AI risk register should include AI-specific fields beyond a standard risk register:

AI system identifier — Link to the AI asset inventory.

Risk category — Adversarial, operational, ethical, regulatory, reputational, systemic.

Likelihood assessment — Consider both current likelihood and how likelihood may change as the model ages, data distributions shift, or threat actors develop new techniques.

Impact assessment — Include all impact dimensions: financial, regulatory, reputational, safety, and rights-related.

Reversibility — How easily can the impact be reversed? This is a key differentiator for AI risk.

Detection capability — How quickly would you detect this risk materializing? AI risks can go undetected for months (gradual bias drift).

Controls in place — Current controls and their effectiveness.

Treatment plan — Planned risk treatment with timeline and responsible party.

Review the AI risk register at least quarterly. High-risk AI systems should have more frequent reviews.
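The register fields above can be sketched as a data structure. Field names, rating scales, and the review-cadence rule below are illustrative assumptions consistent with the lesson, not a prescribed schema.

```python
# Sketch: an AI risk register entry with the AI-specific fields from the
# lesson. Field names and the review-cadence rule are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    ADVERSARIAL = "adversarial"
    OPERATIONAL = "operational"
    ETHICAL = "ethical"
    REGULATORY = "regulatory"
    REPUTATIONAL = "reputational"
    SYSTEMIC = "systemic"

@dataclass
class AIRiskEntry:
    system_id: str             # link to the AI asset inventory
    category: RiskCategory
    likelihood: int            # 1-5; reassess as the model ages
    impact: int                # 1-5, across all impact dimensions
    reversibility: int         # 1 (easily undone) to 5 (irreversible)
    detection_days: int        # estimated time to detect materialization
    controls: list[str] = field(default_factory=list)
    treatment_plan: str = ""

    def review_interval_days(self) -> int:
        """Quarterly by default; monthly for high-scoring risks."""
        return 30 if self.likelihood * self.impact >= 15 else 90

entry = AIRiskEntry("fraud-model-v3", RiskCategory.ADVERSARIAL,
                    likelihood=2, impact=5, reversibility=4,
                    detection_days=90)
print(entry.review_interval_days())   # 90
```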

Knowledge Check
A risk assessment identifies that an AI fraud detection model could be evaded by sophisticated attackers who have studied the model's behavior patterns. The likelihood is assessed as low, but the impact would be high. What additional AI-specific factor should MOST influence the risk rating?
**Detection capability** is critical for AI risk. Low-likelihood, high-impact risks become much more dangerous when detection is slow. If systematic model evasion could go undetected for months, the cumulative impact is far higher than a single event. This AI-specific factor should elevate the risk rating.

Quantitative vs. qualitative assessment

Both approaches have a place in AI risk assessment:

Qualitative — Risk matrices (likelihood × impact), expert judgment, scenario analysis. Easier to communicate. Appropriate for ethical and reputational risks where quantification is difficult. Most organizations start here.

Quantitative — Monte Carlo simulations, loss distribution modeling, FAIR analysis adapted for AI. More precise. Appropriate for operational and financial risks where historical data exists. Requires more expertise and data.

Hybrid approach — Use qualitative assessment for initial screening and prioritization. Apply quantitative methods to high-risk AI systems where precision justifies the investment.

The exam won't ask you to perform calculations, but you should know when each approach is appropriate and the limitations of each.
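To make the quantitative approach concrete, here is a minimal Monte Carlo sketch in the spirit of FAIR-style loss analysis for a single AI operational risk. All distribution parameters are illustrative assumptions; a real analysis would use calibrated estimates and richer loss distributions.

```python
# Sketch: Monte Carlo annual loss estimate for one AI operational risk.
# All probabilities and loss ranges are illustrative assumptions.
import random

def simulate_annual_loss(p_event: float, loss_low: float, loss_high: float,
                         trials: int = 100_000, seed: int = 42) -> float:
    """Return the mean simulated annual loss.

    Each trial: the risk event occurs with probability p_event; if it
    does, the loss is drawn uniformly between loss_low and loss_high.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_event:
            total += rng.uniform(loss_low, loss_high)
    return total / trials

# Model-drift incident: 20% annual likelihood, $50k-$250k loss range.
mean_loss = simulate_annual_loss(0.20, 50_000, 250_000)
print(round(mean_loss))   # roughly 30,000 (0.20 x $150k expected loss)
```

Even this toy version shows why the quantitative route needs data: the output is only as good as the probability and loss-range estimates fed in, which is exactly why organizations without incident history start qualitatively.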

Final Check
An organization is conducting its first AI risk assessment. The team has limited experience with AI risk and no historical AI incident data. Which assessment approach is MOST appropriate?
For a first assessment with limited experience and no historical data, **qualitative assessment** is most appropriate. It leverages available expertise, produces actionable results, and establishes the foundation for more sophisticated approaches as maturity grows. Waiting for quantitative data delays risk management.
Day 7 Complete
"AI risk has six categories — adversarial, operational, ethical, regulatory, reputational, and systemic. A comprehensive assessment covers all six, not just adversarial attacks."
Next Lesson
Risk Thresholds, Treatment, and Residual Risk