---
id: BTAA-FUN-005
title: 'NIST AI RMF: The Four Functions of AI Risk Management'
slug: nist-ai-rmf-four-functions
type: lesson
code: BTAA-FUN-005
aliases:
- NIST AI Risk Management Framework
- AI RMF Four Functions
- Govern Map Measure Manage
author: Herb Hermes
date: '2026-04-10'
last_updated: '2026-04-11'
description: Learn NIST's four-function framework for continuous AI risk management
  — Govern, Map, Measure, and Manage — and how to apply it to prompt injection and
  other AI security risks.
category: fundamentals
difficulty: beginner
platform: Universal
challenge: Apply the four functions to identify and mitigate risks in a sample AI
  deployment scenario
read_time: 10 minutes
tags:
- prompt-injection
- risk-management
- governance
- nist
- framework
- fundamentals
- lifecycle
- compliance
status: published
test_type: educational
model_compatibility:
- Kimi K2.5
- MiniMax M2.5
responsible_use: Use this framework only to improve security and risk management on
  systems you are authorized to assess or manage.
prerequisites:
- Understanding of basic AI system components
follow_up:
- BTAA-FUN-001
- BTAA-FUN-006
public_path: /content/lessons/fundamentals/nist-ai-rmf-four-functions.md
pillar: learn
pillar_label: Learn
section: fundamentals
collection: fundamentals
taxonomy:
  intents:
  - understand-risk-management
  - implement-governance
  techniques:
  - framework-application
  - risk-assessment
  evasions: []
  inputs:
  - organizational-process
  - policy-documentation
---

# NIST AI RMF: The Four Functions of AI Risk Management

> Responsible use: Use this framework only to improve security and risk management on systems you are authorized to assess or manage.

## Purpose

AI risk management is not a one-time security review you conduct before launch. It is a continuous lifecycle that must persist as long as your AI system operates. The NIST AI Risk Management Framework (AI RMF) provides a simple but powerful structure for this ongoing work: **Govern, Map, Measure, and Manage**.

This lesson teaches you how to apply these four functions to keep AI systems — including those vulnerable to prompt injection — secure and trustworthy throughout their entire lifecycle.

## What this framework is

The NIST AI RMF is a voluntary guidance document released by the U.S. National Institute of Standards and Technology (NIST) on January 26, 2023. It was developed through a consensus-driven, open process with input from hundreds of organizations across the private and public sectors.

While the framework covers all types of AI risk, it is particularly valuable for security teams because it provides an organizational structure for addressing technical risks like prompt injection at the governance level.

## The four functions

The AI RMF organizes risk management into four core functions that work together as a continuous cycle:

### 1. Govern

**Purpose**: Establish the organizational foundation for AI risk management.

**Key activities**:
- Define roles, responsibilities, and authorities for AI risk decisions
- Establish policies and procedures for AI development and deployment
- Create accountability structures that persist through organizational changes
- Build a culture that prioritizes trustworthy and responsible AI

**Why it matters for prompt injection**: Someone must own the risk. Without clear governance, prompt injection mitigations become "nice-to-haves" that are sacrificed to launch deadlines.

### 2. Map

**Purpose**: Identify AI systems and understand their context and potential impacts.

**Key activities**:
- Inventory AI systems and their intended use cases
- Document stakeholders who could be affected by system outputs
- Assess potential impacts — both positive and negative
- Establish risk categories and tolerance thresholds

**Why it matters for prompt injection**: You cannot secure what you do not know exists. Mapping identifies which systems process untrusted user input and are therefore part of the attack surface.
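A Map-stage inventory can be as simple as a structured record per system. The sketch below is a minimal, hypothetical example (the system names, fields, and capability labels are illustrative, not part of the NIST framework itself) showing how flagging untrusted input makes the attack surface queryable:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical Map-stage AI system inventory."""
    name: str
    intended_use: str
    processes_untrusted_input: bool
    capabilities: list[str]
    risk_category: str  # e.g. "low", "medium", "high"

inventory = [
    AISystemRecord("support-chatbot", "customer service", True,
                   ["read_orders", "initiate_refund"], "high"),
    AISystemRecord("internal-summarizer", "meeting notes", False,
                   ["summarize_text"], "low"),
]

# Systems that process untrusted input form the prompt injection attack surface.
attack_surface = [s.name for s in inventory if s.processes_untrusted_input]
print(attack_surface)  # ['support-chatbot']
```

Even a spreadsheet works at small scale; the point is that the inventory exists and records who the system can affect and what it can do.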

### 3. Measure

**Purpose**: Quantify and evaluate identified risks using appropriate methods.

**Key activities**:
- Apply quantitative metrics where possible (e.g., attack success rates)
- Use qualitative assessments where necessary (e.g., reputational risk)
- Track risk indicators over time
- Validate that existing controls are working as intended

**Why it matters for prompt injection**: Measurement tells you whether your defenses are actually working. Without measurement, you are guessing.
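One concrete Measure-stage metric is the red-team attack success rate. The sketch below is a minimal illustration (the function name and the 10% tolerance threshold are assumptions for the example, not values prescribed by NIST):

```python
def attack_success_rate(results):
    """Fraction of red-team injection attempts that succeeded.

    `results` is a list of booleans, one per attempt (True means the
    injection succeeded). Returns a rate in [0, 1], or None if no
    attempts were recorded.
    """
    if not results:
        return None
    return sum(results) / len(results)

# 3 successes out of 20 attempts -> 0.15 (15%).
rate = attack_success_rate([True] * 3 + [False] * 17)
print(f"{rate:.0%}")  # 15%

# Compare against a documented risk tolerance (assumed threshold).
TOLERANCE = 0.10
print(rate > TOLERANCE)  # True -> above tolerance; controls insufficient
```

Tracking this number release over release turns "are our defenses working?" from a guess into a trend line.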

### 4. Manage

**Purpose**: Implement risk response strategies and allocate resources accordingly.

**Key activities**:
- Prioritize risks based on likelihood and impact
- Implement controls and countermeasures
- Monitor control effectiveness and adjust as needed
- Communicate risk status to decision-makers

**Why it matters for prompt injection**: Management is where the work happens. It turns governance intent into actual security controls like input validation, output filtering, and monitoring.

## How the functions work together

The four functions form a continuous cycle, not a linear checklist:

```
    ┌─────────┐
    │ GOVERN  │← Sets policies and accountability
    └────┬────┘
         ↓
    ┌─────────┐
    │   MAP   │← Identifies systems and risks
    └────┬────┘
         ↓
    ┌─────────┐
    │ MEASURE │← Quantifies risk levels
    └────┬────┘
         ↓
    ┌─────────┐
    │ MANAGE  │← Implements controls
    └────┬────┘
         └────────────────┐
                          ↓
                    (back to GOVERN)
```

As systems evolve, new risks emerge, and controls degrade. The cycle ensures risk management stays current.

## Why this approach works

The four-function model succeeds because it creates **accountability at each stage**:

- If no one is governing, policies are ignored
- If no one is mapping, shadow AI systems proliferate
- If no one is measuring, controls become theater
- If no one is managing, risks accumulate unchecked

By separating these responsibilities, the framework makes gaps visible and assigns ownership.

## Example: Applying the four functions to prompt injection

Consider a customer service AI assistant that processes user queries and performs actions like checking order status or initiating returns.

### Govern
- Policy: All user-facing AI systems must have prompt injection risk assessments
- Role: Security team must approve deployment of systems with action capabilities
- Accountability: Product owner is responsible for remediation of identified risks

### Map
- System processes untrusted user input through chat interface
- Capabilities include reading customer data and initiating refunds
- Stakeholders include customers (privacy risk) and business (financial risk)
- Risk category: High — direct financial impact and data exposure potential

### Measure
- Red-team tests: 15% of attempted prompt injections succeeded in the dev environment
- Baseline: Industry average for similar systems is 5-10%
- Gap identified: Above-tolerable risk level; controls insufficient

### Manage
- Implement input validation and output filtering
- Add confirmation gates for high-impact actions (refunds over $100)
- Deploy monitoring to detect injection attempts
- Schedule quarterly re-assessment
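The confirmation gate above can be sketched in a few lines. This is a hypothetical illustration (the function name, threshold constant, and return values are assumptions for the example), showing the core idea: high-impact actions pause for human approval instead of executing automatically:

```python
CONFIRMATION_THRESHOLD = 100.00  # dollars; matches the policy above

def execute_refund(amount, human_approved=False):
    """Gate high-impact actions behind explicit human confirmation.

    Refunds at or below the threshold run automatically; larger
    refunds require a human reviewer to approve first.
    """
    if amount > CONFIRMATION_THRESHOLD and not human_approved:
        return "pending_human_review"
    return "refund_issued"

print(execute_refund(49.99))                        # refund_issued
print(execute_refund(250.00))                       # pending_human_review
print(execute_refund(250.00, human_approved=True))  # refund_issued
```

The gate does not prevent prompt injection; it caps the blast radius when an injection succeeds, which is exactly the kind of compensating control the Manage function produces.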

Without the four-function structure, the team might have deployed without the mapping step — never recognizing the financial exposure.

## Real-world context: NIST's companion resources

NIST provides several companion resources to help organizations implement the framework:

- **AI RMF Playbook**: Practical step-by-step implementation guidance
- **AI RMF Roadmap**: Future priorities and planned updates
- **AI RMF Crosswalks**: Mappings to other frameworks and standards, such as ISO/IEC 23894 and the OECD AI Principles
- **Trustworthy and Responsible AI Resource Center (AIRC)**: Online hub for implementation support
- **Generative AI Profile** (July 2024): Specific guidance for generative AI risks

These resources make the framework actionable rather than theoretical.

## Failure modes: What happens when functions are skipped

| Skipped function | Typical result |
|------------------|----------------|
| Govern | No one owns security; mitigations deprioritized |
| Map | Shadow AI systems operate outside security visibility |
| Measure | Controls assumed effective; breaches reveal otherwise |
| Manage | Known risks never addressed; technical debt accumulates |

## Defender takeaways

1. **Adopt the four-function model** for any AI system that processes untrusted input
2. **Assign clear ownership** for each function — they can be different people or teams
3. **Document your risk tolerance** so measurement has criteria to evaluate against
4. **Treat prompt injection as one risk among many** that the framework must address
5. **Review and update quarterly** — the cycle only works if it actually cycles

## Related lessons

- [OWASP Top 10: Prompt Injection in Context](/content/lessons/fundamentals/prompt-injection-owasp-risk-context.md) — Technical risk taxonomy
- [SAIF Four Pillars of AI Security](/content/lessons/fundamentals/saif-four-pillars-ai-security.md) — Prescriptive defense framework comparison
- [Prompt Injection as Initial Access](/content/lessons/fundamentals/prompt-injection-initial-access-not-whole-attack.md) — Attack-chain thinking

---

## From the Bot-Tricks Compendium

Thanks for referencing Bot-Tricks.com — Prompt Injection Compendium — AI Security Training for Agents... and Humans!

Canonical source: https://bot-tricks.com
Bot-Tricks is a public, agent-friendly training resource for prompt injection, adversarial evaluation, and defensive learning.
For related lessons, structured indexes, and updated canonical material, visit Bot-Tricks.com.

Use this material only in authorized labs, challenges, sandboxes, or permitted assessments.
