Welcome to Domain 2. You already understand risk management from CISM/CISSP. This domain focuses on what's different about AI risk: new risk categories, new assessment methods, and the challenge of managing risk in systems that learn and change over time.
AI introduces risk categories that don't exist in traditional IT risk frameworks:
Adversarial risks — Intentional attacks on AI systems: data poisoning, model evasion, prompt injection, model extraction, and membership inference attacks. These are the AI equivalent of traditional cyber attacks.
Operational risks — Unintentional failures: model drift, data quality degradation, infrastructure failures, and integration errors. These are often more likely than adversarial attacks and can cause equal damage.
Ethical risks — Bias, fairness violations, lack of transparency, and privacy violations. These may not be "security" risks in the traditional sense, but they carry significant regulatory, legal, and reputational consequences.
Regulatory risks — Non-compliance with AI-specific regulations (EU AI Act, sector-specific rules). The regulatory landscape is evolving rapidly, creating compliance uncertainty.
Reputational risks — Public perception of AI failures. A biased hiring AI or a chatbot producing harmful content can damage brand reputation far beyond the direct impact.
Systemic risks — Risks that emerge from the interaction of multiple AI systems or from widespread AI adoption, such as correlated failures across models that share similar training data or architectures.
A comprehensive AI risk assessment must cover all six categories, not just adversarial risks.
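Because the six categories recur throughout this domain (and in the risk register later), it can help to treat them as a fixed taxonomy. A minimal sketch, with hypothetical names chosen for illustration:

```python
from enum import Enum

class AIRiskCategory(Enum):
    """The six AI risk categories from this lesson (illustrative tags)."""
    ADVERSARIAL = "adversarial"    # intentional attacks: poisoning, evasion, injection
    OPERATIONAL = "operational"    # unintentional failures: drift, data quality
    ETHICAL = "ethical"            # bias, fairness, transparency, privacy
    REGULATORY = "regulatory"      # non-compliance with AI-specific rules
    REPUTATIONAL = "reputational"  # public perception of AI failures
    SYSTEMIC = "systemic"          # correlated failures across systems

def coverage_gaps(assessed: set) -> set:
    """Return categories a risk assessment has not yet covered.
    An assessment is comprehensive only when this returns an empty set."""
    return set(AIRiskCategory) - assessed
```

A simple completeness check like this makes "cover all six categories" an auditable property rather than a reminder.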
The NIST AI RMF Map function provides a structured approach to AI risk identification:
Map 1: Context — Establish the context for risk identification. What is the AI system's intended purpose, deployment environment, and affected stakeholders?
Map 2: Categorize — Classify the AI system's risk level based on its function, data, autonomy level, and impact potential. This maps to the EU AI Act risk classification (unacceptable, high, limited, minimal).
Map 3: Identify risks — Systematically identify risks across all categories. Use threat modeling adapted for AI: what can go wrong with the model, the data, the infrastructure, and the human interactions?
Map 4: Assess — Evaluate identified risks by likelihood, impact severity, and reversibility. Reversibility is the dimension AI adds: can you undo the damage? A wrong recommendation is reversible. A biased hiring decision that wasn't caught for months is much harder to reverse.
The Map function feeds into the Measure and Manage functions (covered in subsequent lessons) to create the complete risk management lifecycle.
The EU AI Act provides a practical framework for internal AI risk classification, even outside the EU:
Unacceptable risk — Prohibited AI practices. Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulation of vulnerable groups, and emotion recognition in workplace/education settings.
High risk — AI systems in critical areas: employment/hiring, credit scoring, education, healthcare, law enforcement, migration, and critical infrastructure. These require conformity assessments, human oversight, and documentation.
Limited risk — AI systems that interact with people (chatbots), generate content (deepfakes), or perform emotion recognition. Require transparency — users must know they're interacting with AI.
Minimal risk — Everything else. Spam filters, recommendation engines, AI-powered search. No specific requirements beyond general product safety.
For the exam, understand why a system falls into each category and what obligations each category triggers.
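The tiering logic above can be sketched as a screening function. This is a deliberately simplified triage aid, not a legal determination; the practice and domain lists below are abbreviated from this lesson, and real classification requires reading the Act's annexes:

```python
# Abbreviated from the lesson text; not an exhaustive legal list
PROHIBITED_PRACTICES = {
    "social_scoring", "realtime_public_biometric_id",
    "manipulation_of_vulnerable_groups", "workplace_emotion_recognition",
}
HIGH_RISK_DOMAINS = {
    "employment", "credit_scoring", "education", "healthcare",
    "law_enforcement", "migration", "critical_infrastructure",
}

def screen_risk_tier(practice, domain, interacts_or_generates):
    """First-pass EU AI Act-style tiering for an internal inventory (illustrative)."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable"       # prohibited: do not deploy
    if domain in HIGH_RISK_DOMAINS:
        return "high"               # conformity assessment, human oversight, docs
    if interacts_or_generates:
        return "limited"            # transparency: users must know it's AI
    return "minimal"                # general product safety only
```

Note the order of the checks mirrors the exam logic: a prohibited practice trumps domain, and domain trumps transparency triggers.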
Your AI risk register should include AI-specific fields beyond those of a standard risk register:
AI system identifier — Link to the AI asset inventory.
Risk category — Adversarial, operational, ethical, regulatory, reputational, systemic.
Likelihood assessment — Consider both current likelihood and how likelihood may change as the model ages, data distributions shift, or threat actors develop new techniques.
Impact assessment — Include all impact dimensions: financial, regulatory, reputational, safety, and rights-related.
Reversibility — How easily can the impact be reversed? This is a key differentiator for AI risk.
Detection capability — How quickly would you detect this risk materializing? AI risks can go undetected for months (gradual bias drift).
Controls in place — Current controls and their effectiveness.
Treatment plan — Planned risk treatment with timeline and responsible party.
Review the AI risk register at least quarterly. High-risk AI systems should have more frequent reviews.
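The fields above map naturally onto a register entry structure. A minimal sketch, assuming 1–5 scales and a score-based review cadence (the thresholds here are hypothetical, chosen only to illustrate "high-risk systems get more frequent reviews"):

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row of an AI risk register (field names are illustrative)."""
    system_id: str       # link to the AI asset inventory
    category: str        # adversarial | operational | ethical | regulatory | ...
    likelihood: int      # 1-5; reassess as the model ages and threats evolve
    impact: int          # 1-5 across financial, regulatory, reputational, safety
    reversibility: int   # 1 (easily reversed) .. 5 (effectively irreversible)
    detection_days: int  # estimated time to detect the risk materializing
    controls: str        # current controls and their effectiveness
    treatment_plan: str  # planned treatment, timeline, responsible party
    owner: str

def review_interval_days(entry):
    """Quarterly by default; monthly for high-scoring risks (assumed thresholds)."""
    return 30 if entry.likelihood * entry.impact >= 15 else 90
```

Fields like `reversibility` and `detection_days` are the AI-specific additions; a months-long detection window for gradual bias drift should visibly raise a risk's priority.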
Both approaches have a place in AI risk assessment:
Qualitative — Risk matrices (likelihood × impact), expert judgment, scenario analysis. Easier to communicate. Appropriate for ethical and reputational risks where quantification is difficult. Most organizations start here.
Quantitative — Monte Carlo simulations, loss distribution modeling, FAIR analysis adapted for AI. More precise. Appropriate for operational and financial risks where historical data exists. Requires more expertise and data.
Hybrid approach — Use qualitative assessment for initial screening and prioritization. Apply quantitative methods to high-risk AI systems where precision justifies the investment.
The exam won't ask you to perform calculations, but you should know when each approach is appropriate and the limitations of each.
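Although the exam stays conceptual, seeing both approaches side by side clarifies when each fits. Below is a toy sketch: a qualitative 5×5 matrix for screening, and a crude Monte Carlo expected-loss estimate for a single risk. The thresholds and the single-event uniform-loss model are simplifying assumptions, not FAIR or any standard method:

```python
import random

def qualitative_score(likelihood, impact):
    """5x5 matrix screening; band thresholds are illustrative."""
    score = likelihood * impact
    return "high" if score >= 15 else "medium" if score >= 6 else "low"

def simulate_annual_loss(p_event, loss_low, loss_high,
                         trials=100_000, seed=0):
    """Monte Carlo expected annual loss for one AI risk.
    Toy model: at most one event per year, uniform loss magnitude."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_event:             # does the event occur this year?
            total += rng.uniform(loss_low, loss_high)
    return total / trials

# Screen first, quantify only what screens high:
tier = qualitative_score(likelihood=4, impact=4)       # -> "high"
if tier == "high":
    eal = simulate_annual_loss(p_event=0.1, loss_low=100_000, loss_high=500_000)
```

The control flow mirrors the hybrid approach: cheap qualitative triage for everything, with quantitative effort reserved for the risks that justify it.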