Final Exam Prep — Strategy, Timing, and Practice Exam
⏱ 25 min · 📊 Advanced · AIGP Certification Prep
Congratulations — you've completed the entire AIGP curriculum. Today is about exam strategy, practice, and building your personalized study sheet. Let's make sure you pass on your first attempt.
Your one-page study reference — review this in the coffee shop before the exam.
Exam Day Logistics
Pearson VUE test centers:
- Arrive 30 minutes early with two forms of valid ID
- Government-issued photo ID required (passport, driver's license)
- No personal items in the testing room (watches, phones, notes)
- Lockers provided for personal belongings
OnVUE remote proctoring:
- Stable internet connection required
- Webcam and microphone must be functional
- Clear desk — no notes, second monitors, or other materials
- Room must be private with a closed door
- ID verification via webcam
Exam structure:
- 100 questions total (85 scored, 15 unscored pilot questions — you won't know which)
- 2 hours 45 minutes (165 minutes)
- Optional 15-minute break after approximately question 50
- All questions are multiple choice with 4 options
- ~30% of questions tied to case study scenarios
Time Management Strategy
You have 165 minutes for 100 questions = approximately 1 minute 39 seconds per question.
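The arithmetic above can be sanity-checked in a few lines, along with the three-pass time split used in the strategy that follows (all numbers come from the exam format above):

```python
# Sanity-check the AIGP exam time budget.
TOTAL_MINUTES = 165
QUESTIONS = 100

seconds_per_question = TOTAL_MINUTES * 60 / QUESTIONS  # 99.0 seconds
minutes, seconds = divmod(int(seconds_per_question), 60)
print(f"{minutes} min {seconds} s per question")  # prints "1 min 39 s per question"

# Three-pass split: 90 + 45 + 30 minutes must use the full budget.
passes = {"first pass": 90, "second pass": 45, "final review": 30}
assert sum(passes.values()) == TOTAL_MINUTES
```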
Recommended approach:
First pass (90 minutes): Answer every question. If you know the answer, select it and move on. If you're unsure, make your best guess and flag it for review.
Second pass (45 minutes): Return to flagged questions. With a fresh perspective and the context of other questions, you may find the answer clearer.
Final review (30 minutes): Review remaining flagged questions. Check for any questions you may have misread. Verify your answers for case-study questions (where one misunderstanding can affect multiple answers).
Golden rule: Never leave a question unanswered. There's no penalty for guessing. Eliminate what you can and choose from the remaining options.
Answer Elimination Strategies
IAPP uses specific patterns in their distractors. Knowing these helps you eliminate wrong answers:
"Always" and "never" answers — The AIGP exam rarely tests absolutes. If an option says "always required" or "never applicable," it's usually wrong. Governance is context-dependent.
Extreme actions — Options suggesting extreme responses (shut everything down, fire the team, ban all AI) are usually wrong. The correct answer typically involves proportionate, measured governance action.
Technically correct but governance-incomplete — An answer might be factually accurate but miss the governance point. Example: "Improve model accuracy" is technically helpful but may not be the governance response the question asks for.
Order of operations — Many questions test what you should do FIRST. The correct answer is usually assessment/investigation before action. Don't jump to solutions before understanding the problem.
Practice Exam — Domain I (Questions 1-5)
An organization develops an AI ethics charter listing principles of fairness, transparency, and accountability, but assigns no roles, creates no processes, and establishes no metrics. What maturity level has it reached?
An ethics charter with principles but no roles, processes, or metrics represents ethical AI aspirations only. Responsible AI requires operationalization (processes, controls, roles). Trustworthy AI requires measurable, verifiable outcomes. This organization is at the earliest maturity stage.
Practice Exam — Domain I
A hybrid AI governance operating model means:
A hybrid model combines central standard-setting (consistency) with business unit implementation (agility). This is considered best practice for organizations with diverse AI use cases, balancing governance consistency with operational flexibility.
Practice Exam — Domain I
Under the v2.1 BoK update, which governance area received increased emphasis?
BoK v2.1 added specific performance indicators for data governance and IP policy (I.C.2) and third-party AI risk (I.C.3). These are new emphasis areas that reflect the growing importance of data rights and supply chain risk in AI governance.
Practice Exam — Domain I
An organization's risk tolerance statement says: "No AI system may be deployed with a fairness gap exceeding 5% across demographic groups." This is:
Risk tolerance defines specific, quantifiable thresholds. "5% fairness gap" is measurable and auditable. Risk appetite is a broader statement of willingness to accept risk. Risk capacity is the maximum risk the organization can absorb. Ethical principles are value-based, not quantified.
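What makes a 5% threshold a risk *tolerance* rather than a principle is that it can be computed and gated on. A minimal sketch of such a deployment check (the group names and selection rates are illustrative assumptions, not from any real system):

```python
# Gate check for a risk tolerance statement like:
# "No AI system may be deployed with a fairness gap exceeding 5%."
# Per-group selection rates below are hypothetical illustration values.
selection_rates = {"group_a": 0.62, "group_b": 0.58, "group_c": 0.60}

fairness_gap = max(selection_rates.values()) - min(selection_rates.values())
within_tolerance = fairness_gap <= 0.05

print(f"gap = {fairness_gap:.2%}, deployable: {within_tolerance}")
```

Because the threshold is numeric, an auditor can rerun this check on logged production data, which is exactly the "measurable and auditable" property the explanation describes.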
Practice Exam — Domain I
A model owner in a governance framework is primarily responsible for:
The model owner is the single point of accountability for an AI system across its entire lifecycle. Data scientists build models (responsible), executives set strategy, and auditors conduct audits. The model owner owns the deployment decision, ongoing monitoring, and incident response.
Practice Exam — Domain II (Questions 6-10)
GDPR Article 22 applies to AI systems that:
Article 22 has specific triggers: the decision must be (1) solely automated (no meaningful human intervention) and (2) produce legal effects or similarly significant effects. Not all AI systems meet both criteria. GDPR and the EU AI Act are separate regulations with different triggers.
Practice Exam — Domain II
Under the EU AI Act, who bears the PRIMARY obligation to ensure high-risk AI systems meet quality, safety, and compliance requirements before market placement?
Providers bear the heaviest obligations for high-risk AI systems — risk management (Art. 9), data governance (Art. 10), documentation (Art. 11), transparency (Art. 13), human oversight design (Art. 14), and conformity assessment. Deployers, importers, and authorities have separate, lighter obligations.
Practice Exam — Domain II
The NIST AI RMF's "Govern" function is unique because it:
Govern is the overarching function — it establishes the organizational foundation (policies, roles, culture, accountability) that enables Map, Measure, and Manage to operate effectively. It's not a one-time activity, not limited to technical governance, and not government-specific.
Practice Exam — Domain II
An AI system classified as having "systemic risk" under the EU AI Act's GPAI provisions must:
GPAI models with systemic risk (presumed at >10^25 FLOPs training compute) face additional obligations beyond standard GPAI requirements. They're not banned, not limited to government, and not required to be open-source.
Practice Exam — Domain II
When mapping across the EU AI Act, NIST AI RMF, and ISO 42001, the PRIMARY benefit of creating a compliance matrix is:
A compliance matrix enables "comply once, satisfy many" by identifying where frameworks share requirements. It doesn't eliminate separate audits, guarantee universal legal compliance, or reduce the actual number of requirements — it makes compliance management more efficient.
Practice Exam — Domain III (Questions 11-15)
At which stage of the AI lifecycle does governance intervention provide the MOST cost-effective risk reduction?
"Shift left" governance is most cost-effective — catching issues at problem formulation prevents entire categories of downstream problems. Fixing a poorly defined use case costs orders of magnitude less than fixing a deployed biased model.
Practice Exam — Domain III
Two data annotators labeling customer complaints as "urgent" or "non-urgent" agree on only 55% of labels. This indicates:
55% agreement is barely above chance for a binary classification. This indicates unclear labeling guidelines, not annotator incompetence or dataset size issues. The solution is to revise guidelines for clarity, not replace annotators.
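Why 55% raw agreement on a binary task is "barely above chance" becomes concrete with a chance-corrected agreement statistic such as Cohen's kappa. A self-contained sketch (the 40-item label sample is a constructed illustration with balanced labels):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical 40-item sample: 55% raw agreement, balanced label marginals.
ann_a = ["urgent"] * 20 + ["non-urgent"] * 20
ann_b = (["urgent"] * 11 + ["non-urgent"] * 9     # agrees on 11 of A's "urgent"
         + ["urgent"] * 9 + ["non-urgent"] * 11)  # agrees on 11 of A's "non-urgent"

print(round(cohens_kappa(ann_a, ann_b), 2))  # prints 0.1 (near-chance agreement)
```

With balanced binary labels, chance agreement is already 50%, so 55% observed agreement yields kappa of about 0.1, which is why the governance fix is clearer guidelines rather than more data or different annotators.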
Practice Exam — Domain III
A red team for a generative AI system should be:
Red teams must be independent (to avoid developer blind spots) and diverse (to test across cultural, linguistic, and demographic perspectives). Red teaming is a proactive governance activity, not a regulatory response.
Practice Exam — Domain III
Under the EU AI Act, a deployer of a high-risk AI system substantially modifies the system. The deployer now:
Substantial modification triggers a role change — the deployer becomes a provider and assumes ALL provider obligations, including conformity assessment, documentation, and risk management. A notified body assessment is only required for specific AI categories, not all high-risk AI.
Practice Exam — Domain III
An AI Impact Assessment identifies moderate residual risk after mitigation. The governance team should:
Residual risk is normal — no AI system has zero risk. The governance decision is whether the residual risk is within tolerance. This decision must be deliberate, documented, and made by authorized personnel.
Practice Exam — Domain IV (Questions 16-20)
Concept drift in a deployed AI system is BEST described as:
Concept drift occurs when the real-world patterns the model learned during training change over time. The model itself doesn't change — the world does. This differs from data drift (input distribution changes) and scope creep (unintended use).
Practice Exam — Domain IV
Human-on-the-loop oversight means:
HOTL = AI operates autonomously, human monitors and can intervene. HITL = human reviews every decision. HOVL = strategic oversight only. Full automation = no human involvement.
Practice Exam — Domain IV
Under the EU AI Act, a serious incident involving a high-risk AI system must be reported to authorities within:
Under the EU AI Act (Article 73), providers must report serious incidents involving high-risk AI systems to the relevant market surveillance authority immediately after establishing a causal link (or its reasonable likelihood), and no later than 15 days after becoming aware of the incident. Shorter deadlines apply in severe cases: 2 days for a widespread infringement or serious disruption of critical infrastructure, and 10 days where a death is involved. Watch for the 72-hour distractor, which is the GDPR personal data breach notification deadline, not the AI Act's.
Practice Exam — Domain IV
An AI system is retrained on new data. The governance framework should treat this as:
Retraining creates a new model with potentially different performance, fairness characteristics, and error patterns. It requires its own governance review — testing, documentation updates, and approval — proportionate to the AI system's risk level.
Practice Exam — Domain IV
Which of the following is the MOST critical governance element to have in place before deploying any AI system to production?
A monitoring framework is the minimum governance requirement for production deployment. Without monitoring, you cannot detect performance degradation, fairness issues, drift, or incidents. Marketing, detailed architecture, and user surveys are valuable but not governance-critical for deployment.
Quick-Reference Numbers
- EU AI Act penalties (whichever is higher): €35M or 7% of worldwide annual turnover (prohibited practices), €15M or 3% (most other violations, including high-risk obligations), €7.5M or 1.5% (supplying incorrect information)
- EU AI Act roles: Provider, Deployer, Importer, Distributor
- GPAI systemic risk: 10^25 FLOPs
- NIST AI RMF: Govern, Map, Measure, Manage
- EU HLEG: 7 requirements for trustworthy AI
- Serious incident reporting (EU AI Act Art. 73): 15 days standard; 2 days for widespread infringement or critical-infrastructure disruption; 10 days for a death (72 hours is the GDPR breach deadline)
Key frameworks: EU AI Act, NIST AI RMF, ISO 42001, OECD AI Principles, GDPR
Key concepts: Risk-based approach, proportionality, shift left governance, documentation throughout lifecycle
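The penalty figures in the quick reference are "whichever is higher" caps, so the applicable maximum depends on turnover. A small sketch makes the computation concrete (the €2B turnover figure is a made-up example):

```python
# EU AI Act fines are capped at the HIGHER of a fixed amount and a share
# of worldwide annual turnover. Tiers follow the quick reference above.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation, annual_turnover_eur):
    fixed, pct = TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)

# Hypothetical company with EUR 2B worldwide annual turnover:
print(max_fine("prohibited_practice", 2_000_000_000))    # 140000000.0 (7% > EUR 35M)
print(max_fine("incorrect_information", 2_000_000_000))  # 30000000.0 (1.5% > EUR 7.5M)
```

For small providers the fixed amount dominates; for large ones the turnover percentage does, which is why exam questions often ask which cap applies.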
🎓
Course Complete — You're Ready!
"You've covered all 4 AIGP exam domains in 30 days. Remember: the exam tests APPLICATION of concepts, not memorization. Think like a governance professional — assess the situation, identify the risk, choose the proportionate response, and document your decision."