Day 2 of 18

AI-Specific Security Policies and Procedures

⏱ 18 min 📊 Advanced ISACA AAISM Certification Prep

Your organization already has information security policies. Today we examine why AI requires distinct policies — not just addendums to existing ones — and how to develop standards and guidelines that enable secure AI implementation without blocking innovation.

Why AI policies are different from IT security policies

Traditional IT security policies address known system types: servers, networks, databases, applications. AI systems break these assumptions in fundamental ways.

Non-deterministic behavior — The same input can produce different outputs. Traditional security policies assume predictable system behavior.

Learning from data — AI systems change their behavior based on training data. A policy that's satisfied at deployment may be violated after retraining.

Emergent capabilities — Large models exhibit behaviors not explicitly programmed. Policies must address capabilities that weren't anticipated during development.

Third-party model dependencies — Using pre-trained models or APIs means inheriting security properties you didn't design and can't fully audit.

Output as action — AI outputs increasingly drive automated decisions. A misconfigured firewall rule is bad; a biased AI loan decision is a regulatory violation.

Your AI security policies must account for these differences explicitly. An "AI addendum" bolted onto existing policies will leave critical gaps unaddressed.

Core AI policy categories

A mature AI security policy framework includes these categories:

Acceptable AI Use Policy — Defines what AI can and cannot be used for. Addresses shadow AI (employees using unauthorized AI tools), prohibited use cases (autonomous weapons, social scoring), and acceptable use of generative AI with corporate data.

AI Development Policy — Standards for secure AI development: data handling, model training environments, testing requirements, code review for ML pipelines, and documentation requirements.

AI Procurement Policy — Requirements for purchasing AI products or services: vendor security assessments, model transparency requirements, data handling agreements, and exit strategies.

AI Model Governance Policy — Lifecycle management: model registration, approval workflows, version control, performance monitoring, retraining criteria, and retirement procedures.

AI Data Governance Policy — Controls for AI training data: collection consent, quality standards, bias testing, retention, and deletion requirements.

AI Incident Response Policy — AI-specific incident definitions, escalation criteria, and response procedures that extend your existing IR program.
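The acceptable-use category above is the one most often tested in practice: may a given AI tool be used with a given data classification? A minimal sketch of such a check follows; the tool names, classification tiers, and approval matrix are hypothetical illustrations, not a real policy.

```python
# Map each approved AI tool to the most sensitive data classification
# it may process. Any tool not in the map is shadow AI: deny by default.
APPROVED_TOOLS = {
    "internal-llm": "confidential",
    "vendor-copilot": "internal",
    "public-chatbot": "public",
}

# Classification levels ordered from least to most sensitive.
LEVELS = ["public", "internal", "confidential", "restricted"]

def use_is_permitted(tool: str, data_classification: str) -> bool:
    """Return True if policy allows this tool for this data class."""
    max_allowed = APPROVED_TOOLS.get(tool)
    if max_allowed is None:
        return False  # unapproved tool = shadow AI
    return LEVELS.index(data_classification) <= LEVELS.index(max_allowed)

# A public generative AI tool processing confidential documents fails:
print(use_is_permitted("public-chatbot", "confidential"))  # False
print(use_is_permitted("internal-llm", "confidential"))    # True
```

Deny-by-default for unlisted tools is the key design choice: it converts shadow AI from an invisible gap into an explicit, auditable policy decision.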

Knowledge Check
A department is using a public generative AI tool to summarize confidential merger documents. Which policy gap does this PRIMARILY represent?
This is first and foremost an **Acceptable AI Use** issue. The organization lacks a policy defining which AI tools may be used with which data classifications. While procurement and incident response may also apply, the root policy gap is acceptable use.

Embedding ethical AI principles

AI policies must go beyond traditional security concerns to address ethical dimensions. This isn't optional — it's increasingly a regulatory requirement.

Fairness — Policies should require bias testing before deployment and ongoing monitoring for discriminatory outcomes.

Transparency — Define when AI use must be disclosed to affected parties. The EU AI Act requires disclosure for certain AI interactions.

Accountability — Establish who is responsible when AI causes harm. "The algorithm did it" is not an acceptable answer.

Human oversight — Define when human review is required for AI decisions. High-stakes decisions (hiring, lending, medical diagnosis) typically require human-in-the-loop.

Privacy by design — Require privacy impact assessments for AI systems that process personal data. Integrate GDPR/privacy requirements into the AI development lifecycle.

These principles should be embedded in policy, not aspirational statements in a code of conduct.
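The human-oversight principle can be made concrete as a decision gate. The sketch below assumes a hypothetical category list and confidence threshold; real policies would derive both from the organization's risk classification.

```python
# Decision categories the policy designates as high-stakes; these are
# illustrative assumptions drawn from common regulatory examples.
HIGH_STAKES = {"hiring", "lending", "medical_diagnosis"}

def requires_human_review(category: str, confidence: float) -> bool:
    """High-stakes categories always get a human in the loop;
    other categories only when model confidence is low."""
    if category in HIGH_STAKES:
        return True
    return confidence < 0.80  # assumed low-confidence threshold

print(requires_human_review("lending", 0.99))    # True
print(requires_human_review("marketing", 0.95))  # False
```

Note that for high-stakes categories the model's confidence is deliberately ignored: human review is a policy requirement, not a fallback for uncertain predictions.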

Regulatory alignment

Your AI policies must align with the regulatory landscape, which is evolving rapidly.

EU AI Act — Risk-based classification with specific requirements per risk level. High-risk AI systems require conformity assessments, documentation, and human oversight. The Act entered into force in August 2024, with obligations applying in phases through 2026–2027.

NIST AI RMF Govern function — Provides policy guidance for AI risk management. Maps to organizational policies, processes, and procedures.

Sector-specific regulations — Financial services (model risk management — the Federal Reserve's SR 11-7 and the Bank of England's SS1/23), healthcare (FDA guidance on AI/ML-based software as a medical device), and employment (NYC Local Law 144 for automated employment decision tools).

Emerging state/national laws — Colorado AI Act, proposed federal AI legislation. Your policy framework should be adaptable to new requirements.

The exam expects you to recognize which regulation applies in a scenario, not to memorize regulation text.

Knowledge Check
An organization is deploying an AI system that will make hiring recommendations. Which regulatory consideration is MOST critical for the security manager to address in policy?
AI in hiring is subject to **anti-discrimination laws and specific employment AI regulations** (like NYC Local Law 144). While GDPR and EU AI Act may also apply, the most critical regulatory concern for hiring AI is ensuring it doesn't discriminate — this carries the highest legal and reputational risk.

From policy to procedure

Policies state intent. Procedures make them operational. For every AI policy, develop supporting procedures:

Policy: "All AI models must be registered before deployment."

Procedure: Step-by-step process for model registration, including required documentation, approval workflow, and registration database entry.

Policy: "AI systems processing personal data must complete a privacy impact assessment."

Procedure: PIA template for AI systems, assessment criteria, reviewer roles, and escalation for high-risk findings.

Policy: "AI incidents must be reported within 24 hours."

Procedure: AI incident classification criteria, reporting channels, initial response actions, and regulatory notification triggers.

Without procedures, policies become shelfware. Without policies, procedures lack authority. You need both.
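The registration policy/procedure pair above can be sketched as a pre-deployment gate: a model deploys only if its registry entry carries the documentation the procedure requires. The field names below are hypothetical examples, not a standard schema.

```python
# Documentation fields the (hypothetical) registration procedure
# requires before a model may deploy.
REQUIRED_FIELDS = {
    "owner",          # accountable individual or team
    "intended_use",   # documented purpose
    "risk_level",     # e.g. from a risk-based classification
    "pia_completed",  # privacy impact assessment, if personal data
    "approval_ref",   # governance approval workflow reference
}

def registration_gaps(registry_entry: dict) -> set:
    """Return the required fields missing or empty in a registry
    entry; an empty set means the model may proceed to deployment."""
    present = {k for k, v in registry_entry.items() if v}
    return REQUIRED_FIELDS - present

entry = {"owner": "fraud-ml-team", "intended_use": "transaction scoring"}
print(sorted(registration_gaps(entry)))
# ['approval_ref', 'pia_completed', 'risk_level']
```

Wiring a check like this into the deployment pipeline is what turns the policy from shelfware into an enforced control: a missing field blocks the deploy rather than triggering a memo after the fact.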

Final Check
The security team has written comprehensive AI security policies. Six months later, engineering teams are routinely deploying AI models without governance review. What is the MOST LIKELY root cause?
The most common reason for policy non-compliance is the **gap between policy and procedure.** Policies state what should happen; procedures define how. Without clear procedures, approval workflows, and enforcement mechanisms, policies remain theoretical.
Day 2 Complete
"AI policies must be distinct from IT security policies — they address non-deterministic behavior, learning systems, and ethical dimensions that traditional policies don't cover."
Next Lesson
AI Asset and Data Lifecycle Management