Day 20 of 21

AI Compliance — Laws, Standards, and Corporate Policy


AI governance structures and responsible AI principles only matter if they are backed by enforceable rules. Today we move from principles to regulations, standards, and corporate policies — the compliance landscape that every AI security professional must navigate. The regulatory environment for AI is evolving rapidly, and the CY0-001 exam expects you to understand the major frameworks, their requirements, and how organizations translate external mandates into internal policies. This lesson maps to CY0-001 Objective 4.3.

The EU AI Act — Risk-Based Regulation

The EU AI Act is the world's first comprehensive legal framework specifically designed to regulate artificial intelligence. It entered into force in August 2024 and takes a risk-based approach — the greater the potential harm, the more stringent the requirements. For the exam, you need to understand the four risk classifications and what each requires.

Unacceptable risk AI systems are outright prohibited. These include AI systems that deploy subliminal manipulation techniques to distort behavior in ways that cause harm, social scoring systems used by governments that evaluate citizens based on social behavior, real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), and systems that exploit vulnerabilities of specific groups such as children or persons with disabilities. If you encounter an exam question about a government using AI to score citizens' social behavior to determine access to services, the answer is "prohibited under the EU AI Act."

High-risk AI systems are permitted but subject to extensive obligations. High-risk categories include AI used in critical infrastructure (energy, water, transportation), educational and vocational training (AI that determines access to education), employment (hiring, promotion, termination decisions), essential services (credit scoring, insurance pricing), law enforcement (predictive policing, evidence evaluation), migration and border control, and administration of justice. High-risk systems must implement a risk management system, data governance, technical documentation, record-keeping, transparency and information to users, human oversight, and accuracy, robustness, and cybersecurity measures, and they must pass a conformity assessment before deployment.

Limited risk AI systems face transparency obligations. This primarily covers AI systems that interact with people (chatbots must disclose they are AI), generate synthetic content (deepfakes must be labeled), or use emotion recognition or biometric categorization (users must be informed). The key requirement is disclosure — users must know they are interacting with AI.

Minimal risk AI systems — such as spam filters, AI-enabled video games, and inventory management systems — face no specific obligations under the EU AI Act, though voluntary codes of conduct are encouraged.
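
To make the tiered structure concrete, here is a minimal Python sketch that maps example use cases from this lesson to their EU AI Act risk tiers. The use-case labels and the conservative default are illustrative assumptions, not a legal determination.

```python
# Illustrative sketch: mapping example AI use cases to EU AI Act risk tiers.
# The category labels are simplified examples drawn from this lesson,
# not an authoritative legal mapping.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted with extensive obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"


TIER_BY_USE_CASE = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "hiring_decisions": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "predictive_policing": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case."""
    # Defaulting unknown cases to HIGH pending legal review is a
    # conservative policy choice, not a rule from the Act itself.
    return TIER_BY_USE_CASE.get(use_case, RiskTier.HIGH)


print(classify("government_social_scoring").value)  # prohibited outright
```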

OECD AI Standards

The Organisation for Economic Co-operation and Development (OECD) published its AI Principles in 2019, making it one of the earliest international frameworks for AI governance. While not legally binding, the OECD AI Principles have been adopted or referenced by over 40 countries and have significantly influenced subsequent regulations, including the EU AI Act.

The OECD establishes five principles for responsible stewardship of trustworthy AI:
- Inclusive growth, sustainable development, and well-being — AI should benefit people and the planet.
- Human-centered values and fairness — AI should respect human rights, democratic values, and diversity, with safeguards to ensure fairness.
- Transparency and explainability — people should understand when they are interacting with AI and be able to challenge AI-driven outcomes.
- Robustness, security, and safety — AI systems should function reliably and securely throughout their lifecycle, with potential risks continuously assessed and managed.
- Accountability — organizations developing and deploying AI should be accountable for their AI systems' adherence to these principles.

The OECD also provides five recommendations for national policies. These include investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment, building human capacity and preparing for labor market transformation, and promoting international cooperation for trustworthy AI.

For the exam, understand that the OECD AI Principles serve as a foundational reference that many national and organizational AI governance frameworks build upon. When a question asks about international AI governance standards, the OECD is a key answer.

ISO AI Standards

The International Organization for Standardization (ISO) has published several standards relevant to AI governance and security. Two are particularly important for the CY0-001 exam.

ISO/IEC 42001 is the international standard for an AI Management System (AIMS). It provides a structured framework for organizations to establish, implement, maintain, and continually improve their AI management. Think of it as ISO 27001 (information security management) but specifically tailored for AI. ISO 42001 covers organizational context and AI objectives, leadership commitment and policy, planning for risks and opportunities specific to AI, resource allocation and competency requirements, operational controls for AI development and deployment, performance evaluation and monitoring, and continual improvement processes. Organizations can pursue certification against ISO 42001, which provides third-party validation that their AI management practices meet international standards. This certification is increasingly valuable for demonstrating governance maturity to regulators, customers, and partners.

ISO/IEC 23894 provides guidelines for AI risk management. It extends the general risk management standard (ISO 31000) with AI-specific guidance. ISO 23894 helps organizations identify, analyze, evaluate, and treat risks specific to AI systems, including risks related to bias, transparency, accountability, and robustness. It provides a systematic approach to understanding and managing the unique risk characteristics of AI — such as the difficulty of predicting AI system behavior, the potential for emergent properties in complex models, and the challenges of ensuring AI reliability over time.

Other relevant ISO standards include ISO/IEC 38507 (governance implications of AI), ISO/IEC 22989 (AI concepts and terminology), and ISO/IEC 23053 (framework for AI systems using ML). For exam purposes, focus on 42001 for management systems and 23894 for risk management.

NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF) was published by the U.S. National Institute of Standards and Technology in January 2023. It is a voluntary framework designed to help organizations manage risks associated with AI systems throughout their lifecycle. The AI RMF is structured around four core functions: Govern, Map, Measure, and Manage.

Govern is the foundational function that establishes the organizational structures, policies, and processes for AI risk management. Govern encompasses cultivating a risk-aware culture, defining roles and responsibilities, establishing policies and procedures, and ensuring accountability. This function connects directly to the governance structures we discussed in Lesson 18 — the AI CoE, governance policies, and specialized roles all fall under the Govern function.

Map focuses on understanding the context in which AI systems operate. This includes identifying the intended purpose and use cases of the AI system, understanding the potential impacts on individuals and communities, characterizing the data and its provenance, mapping the system's dependencies and stakeholders, and identifying potential risks and benefits. The Map function ensures that organizations have a comprehensive understanding of their AI systems before attempting to measure or manage risks.

Measure involves assessing, analyzing, and tracking AI risks using quantitative and qualitative methods. This includes evaluating model performance, testing for bias and fairness, assessing robustness against adversarial attacks, measuring transparency and explainability, and tracking risk metrics over time. The Measure function produces the evidence that informs risk management decisions.

Manage focuses on prioritizing and acting on AI risks based on the assessments from the Measure function. This includes implementing risk mitigation strategies, allocating resources to risk treatment, establishing monitoring and response procedures, and planning for risk that cannot be fully eliminated. The Manage function also covers communicating residual risks to stakeholders and documenting risk management decisions.

For the exam, remember the four functions — Govern, Map, Measure, Manage — and understand that they form a continuous cycle, not a one-time process. The NIST AI RMF is designed to be used iteratively as AI systems evolve and as the risk landscape changes.
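
As a minimal sketch of how that cycle might look in practice, the code below wires the four functions around a simple risk record. Only the function names come from the framework; the record fields, the fairness metric, and the threshold are hypothetical illustrations.

```python
# Minimal sketch of the NIST AI RMF functions as an iterative cycle.
# The record fields, metric, and threshold are illustrative assumptions;
# only the four function names come from the framework.
from dataclasses import dataclass, field


@dataclass
class AiRiskRecord:
    system: str
    context: dict = field(default_factory=dict)     # filled in by Map
    metrics: dict = field(default_factory=dict)     # filled in by Measure
    treatments: list = field(default_factory=list)  # filled in by Manage


# Govern: organization-wide policy, e.g. a fairness threshold (hypothetical value).
FAIRNESS_GAP_LIMIT = 0.05


def map_context(rec: AiRiskRecord) -> None:
    """Map: characterize the system's purpose, data, and stakeholders."""
    rec.context = {"purpose": "resume screening", "data": "applicant records"}


def measure(rec: AiRiskRecord) -> None:
    """Measure: run bias/robustness tests and record the results."""
    rec.metrics = {"demographic_parity_gap": 0.08}  # hypothetical test output


def manage(rec: AiRiskRecord) -> None:
    """Manage: prioritize and act on risks surfaced by Measure."""
    if rec.metrics.get("demographic_parity_gap", 0.0) > FAIRNESS_GAP_LIMIT:
        rec.treatments.append("retrain with reweighted data; require human review")


record = AiRiskRecord(system="hiring-model-v2")
# The cycle is continuous: re-run Map/Measure/Manage as the system evolves.
map_context(record)
measure(record)
manage(record)
print(record.treatments)
```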

[Figure: Comparison matrix of the EU AI Act, NIST AI RMF, and ISO 42001 — regulation type, scope, focus, and enforcement. Three frameworks, three approaches: the EU AI Act is mandatory regulation, the NIST AI RMF is voluntary, and ISO 42001 is a certifiable standard.]

Corporate Policies — Sanctioned vs. Unsanctioned AI

Regulatory frameworks set the external boundaries; corporate policies translate those boundaries into specific organizational rules. One of the most critical policy distinctions is between sanctioned and unsanctioned AI tools and models.

Sanctioned AI tools are those that have been reviewed, approved, and authorized by the organization's governance body (typically the AI CoE or an equivalent authority). Sanctioned tools have undergone security assessment, legal review, privacy impact analysis, and procurement vetting. They are covered by the organization's incident response plan, their data handling practices are understood and documented, and their use is governed by specific policies and procedures. Examples include an organization's approved cloud AI platform, a vetted internal ML framework, or a commercially licensed AI tool with an enterprise agreement.

Unsanctioned AI tools are any AI tools used by employees that have not been through the approval process. This is the Shadow AI problem discussed in the previous lesson. Corporate policy should explicitly list categories of unsanctioned tools, define consequences for their use, and provide a clear process for employees to request evaluation of new tools they want to use. The goal is not to block innovation but to ensure that every AI tool in use has been assessed for risk.
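
A minimal sketch of how a sanctioned-tool policy might be enforced, assuming a simple internal registry; the tool names and registry fields are hypothetical.

```python
# Sketch of a sanctioned-tool check against a hypothetical internal registry.
SANCTIONED_TOOLS = {
    "approved-cloud-llm": {"data_max_class": "internal", "reviewed": "2025-01-15"},
    "internal-ml-framework": {"data_max_class": "confidential", "reviewed": "2024-11-02"},
}


def check_tool(tool: str) -> str:
    if tool in SANCTIONED_TOOLS:
        max_class = SANCTIONED_TOOLS[tool]["data_max_class"]
        return f"{tool}: sanctioned (max data classification: {max_class})"
    # Unsanctioned: route to the evaluation process rather than silently blocking,
    # so the policy enables rather than suppresses innovation.
    return f"{tool}: unsanctioned — submit an evaluation request to the AI CoE"


print(check_tool("approved-cloud-llm"))
print(check_tool("random-browser-extension"))
```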

Private vs. public models represent another critical policy decision. Private models are trained and hosted within the organization's infrastructure or a trusted private cloud. They offer greater control over data, security, and model behavior, but require significant investment in infrastructure and expertise. Public models are offered as services by third-party providers — cloud-based LLMs, hosted inference APIs, and AI-as-a-service platforms. Public models are faster to deploy and lower in upfront cost, but introduce third-party risk, data sovereignty concerns, and limited control over model behavior and updates.

Corporate policy should define when each type is appropriate. Sensitive use cases involving regulated data, proprietary information, or high-stakes decisions may require private models. Lower-sensitivity use cases like internal productivity tools may be appropriate for vetted public models with proper data handling controls.

Sensitive Data Governance for AI

AI systems are hungry for data, and this creates unique challenges when that data is sensitive, classified, or regulated. Corporate policies must address what data AI systems can access and under what conditions.

Data classification policies should be extended to cover AI-specific use cases. Data classified as confidential, restricted, or top secret may require additional controls when used for AI training or inference — such as anonymization, synthetic data substitution, differential privacy techniques, or restriction to on-premises processing only. Policies should explicitly state which data classification levels are approved for which types of AI processing.
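
One way to make such a policy explicit and machine-checkable is a policy-as-code lookup, sketched below; the classification levels, processing types, and control names are illustrative assumptions, not a standard taxonomy.

```python
# Policy-as-code sketch: which data classifications are approved for which
# AI processing, and with what required controls (all values hypothetical).
POLICY = {
    ("public", "training"): [],
    ("internal", "training"): ["access logging"],
    ("confidential", "training"): ["anonymization", "on_premises_only"],
    ("confidential", "inference"): ["access logging", "on_premises_only"],
    ("restricted", "training"): None,  # not approved for AI training
}


def required_controls(classification: str, use: str) -> list:
    """Return the controls required for this combination, or raise if disallowed."""
    controls = POLICY.get((classification, use))
    if controls is None:
        raise PermissionError(f"{classification} data is not approved for AI {use}")
    return controls


print(required_controls("confidential", "training"))
# ['anonymization', 'on_premises_only']
```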

Regulated data — including data protected by GDPR, HIPAA, PCI DSS, CCPA, and industry-specific regulations — requires special handling in AI contexts. For example, GDPR's right to erasure creates challenges when personal data has been used to train a model, as removing an individual's influence from a trained model is not straightforward (a problem known as machine unlearning). HIPAA's minimum necessary standard may conflict with the desire to train medical AI models on comprehensive patient records. Policies must address these tensions and define approved approaches.

Data sovereignty requirements specify that data must remain within certain geographic boundaries. This has direct implications for AI: cloud-based AI services may process data in data centers located in other countries, and model training may involve transferring data to GPU clusters in specific regions. Corporate policy must ensure that AI data processing complies with applicable data sovereignty laws, which may require using region-specific cloud deployments, on-premises infrastructure, or contractual guarantees from AI service providers.
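
A minimal sketch of a pre-flight sovereignty check before data is sent to an AI service, assuming a lookup from cloud region to jurisdiction; the region identifiers and jurisdiction map are hypothetical.

```python
# Sketch of a data-sovereignty check (hypothetical regions and jurisdictions).
ALLOWED_JURISDICTIONS = {"EU"}  # e.g., GDPR-scoped personal data must stay in the EU

REGION_JURISDICTION = {
    "eu-west-1": "EU",
    "eu-central-1": "EU",
    "us-east-1": "US",
}


def region_allowed(region: str) -> bool:
    """True if the processing region falls within an allowed jurisdiction."""
    return REGION_JURISDICTION.get(region) in ALLOWED_JURISDICTIONS


for region in ("eu-central-1", "us-east-1"):
    verdict = "allowed" if region_allowed(region) else "blocked: data would leave jurisdiction"
    print(region, "->", verdict)
```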

Third-Party Compliance and Vendor Assessment

Organizations that use third-party AI tools and services must evaluate those providers against compliance requirements. Third-party compliance evaluation is a critical component of AI governance that extends traditional vendor risk management with AI-specific criteria.

When evaluating AI vendors, organizations should assess four areas:
- Data handling practices — how does the vendor store, process, and protect data submitted to their AI service? Is customer data used for model training? Can data be deleted on request?
- Security posture — what security controls does the vendor implement? Do they hold SOC 2, ISO 27001, or other relevant certifications? What is their incident response process?
- Model governance — how does the vendor manage model updates and changes? Will model behavior change without customer notification? Can the customer audit model performance?
- Regulatory compliance — does the vendor comply with applicable regulations (GDPR, HIPAA, EU AI Act)? Do they provide documentation to support the customer's compliance obligations?

Data sovereignty requirements deserve special attention in third-party evaluations. Organizations must verify where the vendor processes data, whether data crosses national boundaries during processing, and whether the vendor offers region-specific deployments that keep data within required jurisdictions. Contractual provisions should explicitly address data location, data handling, and the vendor's obligations if regulations change.

The vendor evaluation process should be documented and repeatable, with standardized questionnaires, scoring criteria, and approval workflows. Approved vendors should be subject to periodic reassessment, and the results should feed into the organization's overall AI risk register.
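
A standardized questionnaire can be reduced to a weighted score feeding the approval workflow, as in the sketch below; the criteria weights and approval threshold are hypothetical policy values an organization would set for itself.

```python
# Sketch of a weighted vendor-assessment score (weights and threshold hypothetical).
WEIGHTS = {
    "data_handling": 0.35,          # storage, training use, deletion on request
    "security_posture": 0.25,       # SOC 2 / ISO 27001, incident response
    "model_governance": 0.20,       # update notifications, auditability
    "regulatory_compliance": 0.20,  # GDPR, HIPAA, EU AI Act documentation
}
APPROVAL_THRESHOLD = 0.75


def vendor_score(answers: dict) -> float:
    """answers maps each criterion to a 0.0-1.0 assessment score."""
    return sum(WEIGHTS[c] * answers.get(c, 0.0) for c in WEIGHTS)


answers = {"data_handling": 0.9, "security_posture": 0.8,
           "model_governance": 0.6, "regulatory_compliance": 0.7}
score = vendor_score(answers)
print(f"score={score:.2f}:", "approved" if score >= APPROVAL_THRESHOLD else "needs remediation")
# score=0.78: approved — results should feed the AI risk register for reassessment.
```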

For the exam, remember that third-party AI tools do not absolve the organization of compliance responsibility. The organization remains accountable for how its data is handled, even when processed by a vendor's AI system. This principle is consistent with existing regulatory frameworks like GDPR, which holds data controllers responsible regardless of which processors they use.

Knowledge Check
Under the EU AI Act, which risk classification applies to a government system that scores citizens based on social behavior to determine access to public services?
Social scoring systems used by governments that evaluate citizens based on social behavior are classified as unacceptable risk under the EU AI Act and are prohibited outright.
Knowledge Check
The NIST AI RMF is structured around four core functions. Which of the following correctly lists all four?
The NIST AI RMF uses four core functions — Govern (establish structures and policies), Map (understand context and risks), Measure (assess and track risks), and Manage (prioritize and act on risks). The other options describe different frameworks (Deming cycle, NIST CSF, and a generic GRC process).
Knowledge Check
Which ISO standard provides a framework for an AI Management System (AIMS), analogous to ISO 27001 for information security?
ISO/IEC 42001 is the standard for AI Management Systems (AIMS), providing a structured framework for establishing, implementing, maintaining, and improving AI management. ISO 23894 covers AI risk management specifically.
Knowledge Check
An organization wants to use an AI model to process patient medical records for diagnosis assistance. According to best practices, which deployment model is MOST appropriate?
Medical records are protected by HIPAA and contain highly sensitive data. A private model hosted within the organization's secured infrastructure provides the greatest control over data handling, security, and compliance. Public models introduce third-party risk, data sovereignty concerns, and potential regulatory violations.
Knowledge Check
When evaluating a third-party AI vendor, which of the following is the MOST critical data sovereignty consideration?
Data sovereignty requires that data remain within certain geographic boundaries. The most critical consideration is where the vendor physically processes and stores data and whether it crosses national boundaries during processing, as this directly affects regulatory compliance.
🎉 Day 20 Complete
"AI compliance is a layered system — international frameworks like the OECD principles set the direction, regulations like the EU AI Act set enforceable rules, standards like ISO 42001 and NIST AI RMF provide structured approaches, and corporate policies translate everything into daily operations."
Next Lesson: Exam Strategy, Timing, and Full Practice Exam