Day 2 of 30

AI Risks and Harms — A Governance Taxonomy

⏱ 15 min 📊 Beginner AIGP Certification Prep

Yesterday you learned why AI needs its own governance. Today, you'll learn to categorize the risks AI creates — because you can't govern what you can't classify.

The AIGP exam tests your ability to distinguish between different types of AI risks and map them to appropriate governance responses. Let's build your risk taxonomy.

[Figure: AI risk taxonomy — individual, organizational, and societal risk categories]
AI risks cascade across three levels: individual harms, organizational exposure, and societal impact.

Harms to Individuals and Groups

AI can harm people directly in several ways:

Discrimination and bias — AI systems that systematically disadvantage people based on race, gender, age, disability, or other protected characteristics. A hiring algorithm that screens out women. A lending model that charges higher rates to minorities.

Privacy violations — AI trained on personal data without consent. Facial recognition used for mass surveillance. Generative AI that memorizes and reproduces private information from training data.

Safety risks — Autonomous vehicles causing accidents. Medical AI providing incorrect diagnoses. AI-controlled systems making decisions that endanger physical safety.

Manipulation — AI-generated deepfakes used for fraud. Recommendation algorithms designed to maximize engagement through psychological manipulation. AI-powered social engineering attacks.

Environmental harm — The massive computational resources required to train large AI models contribute to carbon emissions and energy consumption.

Knowledge Check
A hiring algorithm systematically ranks candidates from one demographic lower. This is primarily an example of:
This is a bias and discrimination risk — the algorithm produces systematically unfair outcomes based on demographic characteristics. While it could also create operational and legal risks for the organization, the primary harm category is bias and discrimination against individuals.

Organizational Risks

AI creates specific risks for the organizations that build or deploy it:

Legal liability — Violations of anti-discrimination laws, privacy regulations, consumer protection statutes, or the EU AI Act can result in lawsuits, fines, and enforcement actions.

Reputational damage — A single AI failure can become a global news story. The reputational cost often exceeds the legal penalties.

Financial risk — Beyond fines, AI failures can cause direct financial losses through incorrect automated decisions, trading errors, or business disruption.

Operational risk — Over-reliance on AI systems that fail, drift, or become unavailable. "Shadow AI" — employees introducing ungoverned tools into workflows without oversight.

Intellectual property risk — AI trained on copyrighted material. Ownership questions around AI-generated content. Trade secrets inadvertently disclosed to AI tools.

Knowledge Check
An employee pastes confidential client contracts into a public generative AI tool to summarize them. Which organizational risk category is MOST directly implicated?
The most direct risk is IP and confidentiality — confidential client information was disclosed to a third-party AI service. While this could also create reputational and operational risks, the primary and most immediate risk is the unauthorized disclosure of confidential information.

Societal Risks

Beyond individuals and organizations, AI poses risks to society at large:

Democratic processes — AI-generated disinformation, deepfakes targeting elections, and algorithmic amplification of polarizing content can undermine democratic institutions.

Labor displacement — AI automation may eliminate jobs faster than new ones are created, with the impact falling disproportionately on certain industries and demographics.

Concentration of power — AI development requires massive resources, potentially concentrating technological and economic power in a small number of organizations or nations.

Misalignment and loss of control — As AI systems become more capable, the risk of systems pursuing goals that diverge from human intentions grows. This is the "alignment problem."

Knowledge Check
An AI-powered news recommendation system consistently amplifies sensational and divisive content because it maximizes user engagement. This is primarily an example of which societal risk?
Algorithmic amplification of divisive content directly threatens democratic processes and social cohesion. The system isn't displacing labor or concentrating power — it's actively undermining informed public discourse by optimizing for engagement over accuracy and balance.

Mapping Risks to Governance Responses

The AIGP exam expects you to connect risk categories to appropriate governance actions:

Bias and discrimination risks → Fairness testing, bias audits, representative training data, demographic parity monitoring

Privacy risks → Data protection impact assessments, purpose limitation policies, consent management, anonymization

Safety risks → Red teaming, adversarial testing, human oversight requirements, kill switches

Operational risks → Monitoring frameworks, drift detection, fallback procedures, incident response plans

IP risks → Acceptable use policies, data classification, contractual protections, access controls
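To make "demographic parity monitoring" and "bias audits" concrete, here is a minimal sketch of one common first-pass metric: per-group selection rates and the disparate impact ratio, checked against the "four-fifths rule" heuristic from US employment guidance. The function names and audit data are hypothetical, and a real audit would use several fairness metrics, not this one alone.

```python
from collections import Counter

def selection_rates(records):
    """Compute the selection (e.g. hiring or approval) rate per group.

    records: list of (group, selected) pairs, where selected is True/False.
    """
    totals, chosen = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Under the four-fifths rule heuristic, a ratio below 0.8
    flags the outcome for a closer bias audit.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: group A selected 40/100, group B 24/100.
records = (
    [("A", True)] * 40 + [("A", False)] * 60
    + [("B", True)] * 24 + [("B", False)] * 76
)

rates = selection_rates(records)
ratio = disparate_impact_ratio(rates)
print(rates)            # {'A': 0.4, 'B': 0.24}
print(round(ratio, 2))  # 0.6 — below 0.8, so flag for a bias audit
```

A ratio below the 0.8 threshold does not prove discrimination; it is a screening signal that the governance response (a full bias audit) should follow.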

This mapping is foundational — you'll use it throughout the rest of this course.

Final Check
Your organization identifies that its AI lending model may have disparate impact across racial groups. Which governance response is MOST appropriate as the first step?
A bias audit is the correct first step — you need to measure the actual impact before deciding on a response. Deploying with a disclaimer doesn't address the harm. Removing features may not eliminate proxy discrimination. Shutting down immediately may be disproportionate if the bias is minor and correctable.
🎯
Day 2 Complete
"AI risks cascade across three levels — individual harms, organizational exposure, and societal impact. You can't govern risks you haven't classified, so build your taxonomy first."
Next Lesson
Ethical, Responsible, and Trustworthy AI