AI systems are not static. They must be updated, retrained, and eventually retired. Each of these lifecycle events creates governance obligations — and the AIGP exam tests whether you can manage them.
When should an AI system be retrained? Define triggers in advance:
Performance triggers:
- Accuracy drops below defined threshold
- Error rates exceed acceptable levels
- Drift detection thresholds breached
Regulatory triggers:
- New regulations change compliance requirements
- Regulatory guidance changes interpretation of existing requirements
- Enforcement actions against similar AI systems reveal new risks
Business triggers:
- Change in business context (new products, markets, customer segments)
- Change in data sources or availability
- Strategic decision to expand or modify the AI's scope
Scheduled triggers:
- Regular retraining on a defined schedule (quarterly, annually)
- Periodic re-evaluation of model performance and relevance
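The quantitative triggers above can be sketched as a simple evaluation routine. This is a minimal illustration, not a prescribed implementation: the threshold values, field names, and `RetrainingPolicy` structure are hypothetical, and regulatory and business triggers are qualitative events that would be raised through governance review rather than computed from metrics.

```python
from dataclasses import dataclass

@dataclass
class RetrainingPolicy:
    # Hypothetical thresholds -- set per system, proportionate to risk level.
    min_accuracy: float = 0.90
    max_error_rate: float = 0.05
    max_drift_score: float = 0.30
    retrain_interval_days: int = 365

def retraining_triggers(metrics: dict, days_since_training: int,
                        policy: RetrainingPolicy) -> list[str]:
    """Return the list of fired triggers; an empty list means no trigger fired."""
    fired = []
    if metrics["accuracy"] < policy.min_accuracy:
        fired.append("performance: accuracy below threshold")
    if metrics["error_rate"] > policy.max_error_rate:
        fired.append("performance: error rate exceeds acceptable level")
    if metrics["drift_score"] > policy.max_drift_score:
        fired.append("performance: drift detection threshold breached")
    if days_since_training >= policy.retrain_interval_days:
        fired.append("scheduled: retraining interval elapsed")
    return fired
```

Defining triggers as data (a policy object) rather than hard-coded logic makes the thresholds auditable and easy to document alongside the system.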
A critical governance concept: retraining creates a new model that requires its own governance review.
A retrained model may behave differently from the original:
- Different training data may introduce new biases
- Different performance characteristics across demographic groups
- Changed behavior for edge cases
- Different error patterns
Governance requirements for retraining:
1. Document the retraining trigger and rationale
2. Apply the same data governance standards to new training data
3. Conduct the same fairness, performance, and robustness testing as the original deployment
4. Compare the retrained model against the current model and the original baseline
5. Go through the appropriate approval process (proportionate to risk level)
6. Update all documentation (model card, technical documentation)
7. Maintain the ability to roll back to the previous model version
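Requirement 4 above (comparing the retrained model against the current model) can be made concrete with a per-group comparison report, which also surfaces the demographic-group differences noted earlier. A minimal sketch, assuming each model's performance is summarized as a mapping from demographic group to accuracy (the structure and tolerance value are illustrative):

```python
def compare_models(current: dict, retrained: dict,
                   tolerance: float = 0.02) -> dict:
    """Compare per-group accuracy and flag groups where the retrained
    model regresses by more than the tolerance."""
    report = {}
    for group in current:
        delta = retrained[group] - current[group]
        report[group] = {
            "delta": round(delta, 4),
            "regression": delta < -tolerance,  # flag meaningful drops
        }
    return report
```

Any flagged regression would feed back into the approval process in step 5 rather than being resolved automatically.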
EU AI Act implication: If retraining constitutes a "substantial modification," the system may need to go through a new conformity assessment.
AI governance requires robust version control:
Model versioning:
- Every deployed model version must be uniquely identified
- Training data, hyperparameters, and configuration for each version must be recorded
- Performance metrics for each version must be documented
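The versioning requirements above amount to an immutable registry record per deployed version. A sketch of what such a record might capture (the field names and example identifier are hypothetical, not a standard schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: records must not be edited after the fact
class ModelVersion:
    version_id: str         # unique identifier, e.g. "credit-risk-2.1.0"
    trained_on: date
    training_data_ref: str  # pointer to the exact dataset snapshot used
    hyperparameters: dict
    metrics: dict           # documented performance for this version
```

Keeping the record frozen mirrors the documentation-chain rule below: each version's documentation is maintained, never overwritten.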
Rollback procedures:
- The previous model version must remain available for rapid rollback
- Define rollback triggers (performance degradation, unexpected behavior, incident)
- Rollback procedures must be tested before deployment
- Rollback decision authority must be defined (who can trigger a rollback?)
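The rollback conditions above combine three checks: a defined trigger, a defined decision authority, and an available previous version. A minimal sketch (the trigger names and role names are hypothetical placeholders for an organization's own definitions):

```python
# Hypothetical trigger and role definitions -- each organization defines its own.
ROLLBACK_TRIGGERS = {"performance_degradation", "unexpected_behavior", "incident"}
AUTHORIZED_ROLES = {"ml_ops_lead", "incident_commander"}

def can_roll_back(trigger: str, requested_by_role: str,
                  previous_version_available: bool) -> bool:
    """A rollback proceeds only when all three governance conditions hold."""
    return (trigger in ROLLBACK_TRIGGERS
            and requested_by_role in AUTHORIZED_ROLES
            and previous_version_available)
```

Encoding the decision authority explicitly answers the "who can trigger a rollback?" question before an incident forces it.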
Documentation chain:
- Each version's documentation must be maintained (not overwritten)
- The transition from one version to another must be documented
- Version comparison reports must be created and reviewed
AI systems eventually need to be decommissioned. Governance for retirement includes:
Retirement criteria:
- System no longer meets performance requirements despite retraining
- Regulatory changes make the system non-compliant
- Business need has changed or ended
- Replacement system is available and proven
- Continued operation creates unacceptable risk
Retirement process:
1. Formal retirement decision with documented rationale
2. Stakeholder communication plan (users, affected individuals, business partners)
3. Data handling: retention, archival, or deletion per policy and regulation
4. Documentation archival: maintain records for regulatory and legal purposes
5. Transition plan: migrate to replacement system or manual process
6. Post-retirement validation: confirm the system is fully decommissioned
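Step 6 above (post-retirement validation) is essentially a checklist audit. A minimal sketch, where the step names are illustrative labels for the process steps listed, not a mandated taxonomy:

```python
# Illustrative labels for the retirement process steps above.
RETIREMENT_STEPS = [
    "retirement_decision_documented",
    "stakeholders_notified",
    "data_handled_per_policy",
    "documentation_archived",
    "transition_completed",
]

def validate_decommission(completed: set[str]) -> list[str]:
    """Return any retirement steps still outstanding; empty means
    the system can be confirmed as fully decommissioned."""
    return [step for step in RETIREMENT_STEPS if step not in completed]
```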
Common mistake: Retiring an AI system without retaining documentation. Regulatory inquiries, legal proceedings, or audit requests may require access to historical AI system records long after retirement.