AI doesn't get a free pass from civil rights laws. When an algorithm discriminates, the same laws that apply to human decision-makers apply to the organization that deployed the AI. Today you'll learn the specific legal frameworks that govern AI in employment, lending, and housing.
Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, or national origin. Two legal theories apply to AI:
Disparate treatment — Intentionally using a protected characteristic in decision-making. An AI that explicitly considers race in hiring decisions violates Title VII under disparate treatment.
Disparate impact — A facially neutral practice that disproportionately affects a protected group. This is where most AI cases arise. An AI hiring tool that screens for "cultural fit" may systematically exclude certain demographics even without explicitly using protected characteristics.
Key point: Under disparate impact theory, intent doesn't matter. If the AI produces discriminatory outcomes, the employer bears the burden of proving the practice is "job-related and consistent with business necessity."
EEOC guidance (2023) confirmed that employers remain liable for discriminatory AI hiring tools even when the tool was developed by a third-party vendor.
The Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities. AI intersects with the ADA in several ways:
Assessment tools — AI video interview platforms that analyze facial expressions, tone, or body language may discriminate against candidates with disabilities affecting these behaviors (autism, facial paralysis, speech impediments).
Reasonable accommodation — Employers must provide reasonable accommodations in AI-driven processes. If an AI assessment isn't accessible, the employer must offer an alternative.
Medical inquiries — The ADA restricts disability-related inquiries before a job offer. AI tools that infer health conditions from behavioral data may constitute prohibited medical inquiries.
Equal Credit Opportunity Act (ECOA) — Prohibits discrimination in lending based on race, color, religion, national origin, sex, marital status, age, or receipt of public assistance. AI lending models that produce unjustified disparate impact can violate ECOA.
Fair Housing Act (FHA) — Prohibits discrimination in housing-related transactions. AI tools used for tenant screening, mortgage underwriting, or property advertising must comply.
Key enforcement example: The DOJ and HUD have brought enforcement actions against AI-driven advertising platforms that allowed discriminatory targeting in housing ads.
New York City's Local Law 144 (effective 2023) is a landmark regulation specifically targeting AI in employment:
- Requires annual independent bias audits of automated employment decision tools (AEDTs)
- Audits must assess disparate impact across race/ethnicity and sex
- Results must be publicly posted on the employer's website
- Candidates must receive advance notice that an AEDT is being used, including instructions for requesting an alternative selection process or accommodation
- Applies to tools that "substantially assist or replace" human decision-making in hiring or promotion
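The core calculation in a bias audit is the impact ratio: each group's selection rate divided by the highest group's selection rate. A common screening heuristic is the EEOC's four-fifths rule, under which a ratio below 0.8 suggests possible adverse impact. A minimal sketch, using hypothetical candidate counts (the group names and numbers are illustrative, not from any real audit):

```python
# Sketch of an impact-ratio check of the kind an LL144-style bias audit
# reports. All data below is hypothetical.

def impact_ratios(selected, total):
    """Return {group: (selection_rate, impact_ratio)}.

    impact_ratio = group selection rate / highest group selection rate.
    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8
    flags possible adverse impact.
    """
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: (rates[g], rates[g] / top) for g in rates}

# Hypothetical hiring funnel: candidates screened in by an AEDT.
selected = {"group_a": 48, "group_b": 24}
total = {"group_a": 100, "group_b": 80}

for group, (rate, ratio) in impact_ratios(selected, total).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here group_b's selection rate is 0.30 against group_a's 0.48, giving an impact ratio of about 0.63, which is below the 0.8 threshold and would be flagged. Note that the four-fifths rule is a rule of thumb, not a legal safe harbor; courts and regulators also look at statistical significance.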
This law is a model for other jurisdictions and is likely to appear on the AIGP exam as an example of AI-specific employment regulation.