The go/no-go decision is the most consequential governance moment in the AI lifecycle. It's the final gate between development and production. The AIGP exam frequently presents scenarios requiring you to make or evaluate this decision.
A structured go/no-go framework evaluates readiness across five dimensions:
1. Legal and Compliance Readiness
- All applicable regulations identified and addressed
- Required assessments completed (DPIA, FRIA, or algorithmic impact assessment, as applicable)
- Lawful basis established for data processing
- Contract provisions in place for third-party components
2. Technical Readiness
- Model performance meets defined thresholds
- Fairness metrics within tolerance
- Robustness and security testing completed
- Infrastructure ready for production workload
3. Governance and Documentation Readiness
- Technical documentation complete (including Annex IV if applicable)
- Model card and datasheets finalized
- Risk assessment documented with approved residual risk level
- Version control and audit trail in place
4. Operational Readiness
- Monitoring framework defined with KPIs and alert thresholds
- Human oversight personnel identified and trained
- Incident response plan specific to this AI system
- Rollback procedures tested
5. Stakeholder Readiness
- Affected stakeholders identified and notified
- Transparency requirements met (disclosures, explanations)
- Feedback mechanisms in place
- Support team trained on AI system behavior
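The five dimensions above function as a gating checklist: every item must be satisfied before a "go" is defensible. As a minimal sketch (the dimension and item names below are illustrative, not a mandated schema), the evaluation logic might look like:

```python
from dataclasses import dataclass

@dataclass
class ReadinessDimension:
    """One of the five readiness dimensions, with its checklist items."""
    name: str
    items: dict[str, bool]  # checklist item -> satisfied?

    def complete(self) -> bool:
        return all(self.items.values())

def go_no_go(dimensions: list[ReadinessDimension]) -> tuple[str, list[str]]:
    """Return ('go' | 'no-go') plus the names of any incomplete dimensions."""
    gaps = [d.name for d in dimensions if not d.complete()]
    return ("go" if not gaps else "no-go", gaps)

# Illustrative data: one open item blocks the entire deployment.
legal = ReadinessDimension("Legal and Compliance", {
    "regulations addressed": True,
    "DPIA completed": True,
    "lawful basis established": True,
    "third-party contracts in place": False,  # still outstanding
})
technical = ReadinessDimension("Technical", {
    "performance thresholds met": True,
    "fairness metrics within tolerance": True,
})

decision, gaps = go_no_go([legal, technical])
# decision == "no-go"; gaps == ["Legal and Compliance"]
```

The design choice worth noting: a single unsatisfied item anywhere produces a no-go, which mirrors the governance principle that readiness is conjunctive, not a weighted average.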
Who has the authority to approve deployment? The governance framework must define approval tiers matched to the system's risk level:
Standard approvals — Low-risk AI systems may be approved by the model owner with technical sign-off from the development team.
Elevated approvals — Medium-risk AI systems require additional sign-off from the AI governance team or risk committee.
Executive approvals — High-risk AI systems require approval from the AI governance officer, legal review, and potentially board-level notification.
Escalation triggers:
- Test results that fail to meet defined thresholds
- Unresolved legal or compliance questions
- Disagreement between technical and governance teams
- Novel use cases without precedent
- External stakeholder concerns
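The approval tiers and escalation triggers above can be sketched as a simple routing rule: a risk tier maps to a set of required approvers, and any active escalation trigger bumps the decision up one tier. The tier names, roles, and one-tier-bump policy below are assumptions for illustration, not a prescribed standard.

```python
# Hypothetical approval matrix; roles mirror the tiers described in the text.
APPROVAL_MATRIX = {
    "low": ["model owner", "development team"],
    "medium": ["model owner", "development team", "AI governance team"],
    "high": ["model owner", "development team", "AI governance officer",
             "legal review"],
}

TIERS = ["low", "medium", "high"]

def required_approvers(risk_tier: str, active_triggers: set[str]) -> list[str]:
    """Return the approvers for a deployment, escalating one tier if any
    escalation trigger (failed thresholds, team disagreement, etc.) is active."""
    idx = TIERS.index(risk_tier)
    if active_triggers:  # assumed policy: any trigger escalates one tier
        idx = min(idx + 1, len(TIERS) - 1)
    return APPROVAL_MATRIX[TIERS[idx]]
```

For example, a medium-risk system with a novel use case and no precedent would route to the high tier, pulling in the AI governance officer and legal review.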
Not every deployment decision is binary. Consider graduated approaches:
Pilot program — Deploy to a limited user group with enhanced monitoring. Validate real-world performance before full rollout.
Limited rollout — Deploy in a specific geography, business unit, or use case before expanding.
Staged deployment — Gradually increase the AI's decision-making authority. Start with AI-assisted (human decides), progress to AI-driven (human reviews), then to AI-autonomous (human monitors).
Shadow mode — The AI runs in production but its decisions are not actioned. Compare AI decisions against actual human decisions to validate alignment.
These approaches reduce deployment risk while enabling real-world validation.
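Shadow mode in particular lends itself to a concrete sketch: the AI scores live traffic, but only the human decision is actioned, and the agreement rate between the two is tracked to validate alignment before any go decision. The function and callable names below are illustrative assumptions.

```python
def shadow_run(cases, model_predict, human_decide):
    """Run the AI in shadow mode: log AI vs. human decisions without
    actioning the AI's output, and return the agreement rate plus a log."""
    agreements = 0
    log = []
    for case in cases:
        ai_decision = model_predict(case)      # recorded, never actioned
        human_decision = human_decide(case)    # this decision is actioned
        log.append((case, ai_decision, human_decision))
        agreements += (ai_decision == human_decision)
    return agreements / len(cases), log

# Toy illustration with stand-in decision functions.
rate, log = shadow_run(
    cases=[1, 2, 3, 4],
    model_predict=lambda c: c % 2 == 0,
    human_decide=lambda c: c > 1,
)
# rate == 0.75: the AI and the human agree on 3 of 4 cases
```

A governance team would set an agreement threshold (and review the disagreement log) before authorizing the move from shadow mode to an actioned deployment.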
Every go/no-go decision must be documented:
- Decision — Go, no-go, or conditional deployment
- Rationale — Why this decision was made
- Conditions — Any conditions attached to the deployment (monitoring requirements, review dates, scope limitations)
- Residual risks — Known risks accepted and why
- Approvers — Who approved, their authority level, date
- Review date — When the deployment will be reassessed
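The decision record above maps naturally onto a structured data type, which makes the audit trail machine-readable. A minimal sketch, assuming the field names mirror the list (this is not a mandated schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeploymentDecision:
    """One go/no-go decision record; field names follow the checklist above."""
    decision: str                      # "go", "no-go", or "conditional"
    rationale: str
    conditions: list[str]              # monitoring, review dates, scope limits
    residual_risks: list[str]          # known risks accepted, and why
    approvers: list[tuple[str, str]]   # (name, authority level)
    decided_on: date
    review_date: date                  # when the deployment is reassessed

# Hypothetical example record.
record = DeploymentDecision(
    decision="conditional",
    rationale="Thresholds met; fairness gap within approved tolerance",
    conditions=["enhanced monitoring for 90 days", "scope limited to pilot group"],
    residual_risks=["minor drift risk accepted pending quarterly review"],
    approvers=[("J. Rivera", "AI governance officer")],
    decided_on=date(2025, 3, 1),
    review_date=date(2025, 6, 1),
)
```

Storing records in this shape means a regulator's question ("who approved this, under what conditions, and when is it re-reviewed?") can be answered by a query rather than an archaeology exercise.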
This documentation is critical for:
- Regulatory compliance (demonstrating governance process)
- Organizational accountability (who decided and why)
- Post-incident analysis (understanding the context of the deployment decision)