The NIST AI RMF is a voluntary framework — but it's the most widely referenced AI risk management standard in the US and increasingly globally. The AIGP exam tests your knowledge of its four core functions and how organizations apply them.
GOVERN — Establish the organizational context, culture, and structures for AI risk management.
- Policies and procedures for AI risk management
- Roles, responsibilities, and accountability
- Organizational risk culture and awareness
- AI risk management integration with enterprise risk management
- Stakeholder engagement and feedback mechanisms
- Diversity, equity, inclusion, and accessibility considerations
MAP — Identify and contextualize AI risks relative to the specific use case.
- Intended use, context, and stakeholders
- Benefits, costs, and risks of the AI system
- Interdependencies and potential cascading effects
- Assumptions, limitations, and potential failure modes
- Risk identification specific to the AI's deployment context
MEASURE — Assess and monitor identified AI risks with appropriate methods.
- Quantitative and qualitative risk metrics
- Fairness and bias metrics
- Performance benchmarks and thresholds
- Third-party testing and evaluation
- Ongoing monitoring against established baselines
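To make the MEASURE function concrete, here is a minimal sketch of one common fairness metric, demographic parity difference, checked against a tolerance threshold. The metric is a standard one, but the sample data, group names, and the 0.1 threshold are illustrative assumptions, not values prescribed by the AI RMF.

```python
# Illustrative fairness check in the spirit of MEASURE.
# Data and threshold are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Max difference in selection rates across groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

outcomes = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}

dpd = demographic_parity_difference(outcomes)
THRESHOLD = 0.1  # illustrative tolerance set during measurement planning
print(f"demographic parity difference: {dpd:.3f}")
print("within threshold" if dpd <= THRESHOLD
      else "exceeds threshold -> escalate")
```

In practice an organization would choose metrics and thresholds appropriate to the use case identified during MAP, and monitor them continuously against established baselines.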
MANAGE — Prioritize and act on AI risks based on assessment results.
- Risk treatment: mitigate, transfer, accept, or avoid
- Resource allocation for risk management activities
- Incident response and escalation
- Communication of risk status to stakeholders
- Continuous improvement of risk management
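One way to picture MANAGE-style prioritization is a simple risk register scored by likelihood times impact and sorted so the highest-scoring risks are addressed first. The field names, the 1-to-5 scoring scale, and the example risks below are illustrative assumptions, not part of the framework itself.

```python
# Hypothetical risk register sketch: prioritize risks by
# likelihood x impact, each tagged with a treatment decision.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int       # 1 (negligible) .. 5 (severe)
    treatment: str    # mitigate / transfer / accept / avoid

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("biased outputs in hiring model", 4, 5, "mitigate"),
    Risk("vendor model deprecation", 2, 3, "transfer"),
    Risk("minor UI latency from guardrails", 3, 1, "accept"),
]

# Highest-scoring risks first = where resources go first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.treatment:<9} {risk.name}")
```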
Voluntary — Unlike the EU AI Act, the NIST AI RMF is not legally binding. However, it's increasingly referenced in procurement requirements and industry standards, and used as a compliance framework.
Risk-based — Application is proportionate: organizations scale their risk management efforts to the level of risk their AI systems pose.
Technology-neutral — Applies to all types of AI systems, not specific technologies.
Lifecycle-oriented — Covers AI risks from design through deployment and retirement.
Iterative — The four functions are meant to be applied continuously, not as a one-time assessment.
Compatible — Designed to work alongside other frameworks (ISO 42001, EU AI Act, sector-specific regulations).
Organizations can create AI RMF profiles to customize the framework for their specific needs:
Current Profile — Describes the current state of AI risk management practices.
Target Profile — Describes the desired future state.
Gap Analysis — Comparing current vs. target profiles identifies gaps and priorities for improvement.
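The current-vs-target comparison can be sketched as a small gap analysis: each profile maps subcategory identifiers to a maturity level, and the difference highlights priorities. The subcategory IDs and the 0-3 maturity scale here are illustrative assumptions, not taken from the framework text.

```python
# Hypothetical gap analysis between a current and target AI RMF profile.
# Keys and maturity levels (0-3) are illustrative.

current = {"GOVERN-1.1": 2, "MAP-1.1": 1, "MEASURE-2.1": 0, "MANAGE-1.1": 2}
target  = {"GOVERN-1.1": 3, "MAP-1.1": 3, "MEASURE-2.1": 2, "MANAGE-1.1": 2}

# Keep only subcategories where the target exceeds current practice
gaps = {
    sub: target[sub] - current.get(sub, 0)
    for sub in target
    if target[sub] > current.get(sub, 0)
}

# Largest gaps first = improvement priorities
for sub, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{sub}: gap {gap}")
```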
Profiles can be created for:
- Specific AI use cases (e.g., AI in healthcare vs. AI in marketing)
- Organizational maturity levels (e.g., early-stage vs. mature AI programs)
- Sector-specific requirements (e.g., financial services AI profile)
The AI RMF Playbook provides practical guidance for each function:
- Suggested actions for implementing each subcategory
- Transparency notes about what information to document
- References to relevant standards and resources
The Playbook is not prescriptive — it offers suggested actions that organizations adapt to their context. This is consistent with NIST's approach across all its frameworks (including the Cybersecurity Framework).