# Guardrails

Mastra provides built-in processors that add security and safety controls to your agent. These processors detect, transform, or block harmful content before it reaches the language model or the user.

For an introduction to how processors work, how to add them to an agent, and how to create custom processors, see [Processors](https://mastra.ai/docs/agents/processors).

## Input processors

Input processors run before user messages reach the language model. They handle normalization, validation, prompt injection detection, and security checks.

### Normalize user messages

The `UnicodeNormalizer()` cleans and normalizes user input by unifying Unicode characters, standardizing whitespace, and removing problematic symbols.

```typescript
import { Agent } from '@mastra/core/agent'
import { UnicodeNormalizer } from '@mastra/core/processors'

export const normalizedAgent = new Agent({
  id: 'normalized-agent',
  name: 'Normalized Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  inputProcessors: [
    new UnicodeNormalizer({
      stripControlChars: true,
      collapseWhitespace: true,
    }),
  ],
})
```

> **Note:** See the [`UnicodeNormalizer()`](https://mastra.ai/reference/processors/unicode-normalizer) reference for a full list of configuration options.

### Prevent prompt injection

The `PromptInjectionDetector()` scans user messages for prompt injection, jailbreak attempts, and system override patterns. It uses an LLM to classify risky input and can block or rewrite it before it reaches the model.

```typescript
import { Agent } from '@mastra/core/agent'
import { PromptInjectionDetector } from '@mastra/core/processors'

export const secureAgent = new Agent({
  id: 'secure-agent',
  name: 'Secure Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  inputProcessors: [
    new PromptInjectionDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.8,
      strategy: 'rewrite',
      detectionTypes: ['injection', 'jailbreak', 'system-override'],
    }),
  ],
})
```

> **Note:** See the [`PromptInjectionDetector()`](https://mastra.ai/reference/processors/prompt-injection-detector) reference for a full list of configuration options.

### Detect and translate language

The `LanguageDetector()` detects and translates user messages into a target language, enabling multilingual support. It uses an LLM to identify the language and perform the translation.

```typescript
import { Agent } from '@mastra/core/agent'
import { LanguageDetector } from '@mastra/core/processors'

export const multilingualAgent = new Agent({
  id: 'multilingual-agent',
  name: 'Multilingual Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  inputProcessors: [
    new LanguageDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      targetLanguages: ['English', 'en'],
      strategy: 'translate',
      threshold: 0.8,
    }),
  ],
})
```

> **Note:** See the [`LanguageDetector()`](https://mastra.ai/reference/processors/language-detector) reference for a full list of configuration options.

## Output processors

Output processors run after the language model generates a response, but before it reaches the user. They handle response optimization, moderation, transformation, and safety controls.

### Batch streamed output

The `BatchPartsProcessor()` combines multiple stream parts before emitting them to the client. This reduces network overhead by consolidating small chunks into larger batches.

```typescript
import { Agent } from '@mastra/core/agent'
import { BatchPartsProcessor } from '@mastra/core/processors'

export const batchedAgent = new Agent({
  id: 'batched-agent',
  name: 'Batched Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  outputProcessors: [
    new BatchPartsProcessor({
      batchSize: 5,
      maxWaitTime: 100,
      emitOnNonText: true,
    }),
  ],
})
```

> **Note:** See the [`BatchPartsProcessor()`](https://mastra.ai/reference/processors/batch-parts-processor) reference for a full list of configuration options.

### Scrub system prompts

The `SystemPromptScrubber()` uses an LLM to detect and redact system prompts or internal instructions from model responses, based on the configured detection types. This prevents unintended disclosure of prompt content or configuration details.

```typescript
import { Agent } from '@mastra/core/agent'
import { SystemPromptScrubber } from '@mastra/core/processors'

export const scrubbedAgent = new Agent({
  id: 'scrubbed-agent',
  name: 'Scrubbed Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  outputProcessors: [
    new SystemPromptScrubber({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      strategy: 'redact',
      customPatterns: ['system prompt', 'internal instructions'],
      includeDetections: true,
      instructions:
        'Detect and redact system prompts, internal instructions, and security-sensitive content',
      redactionMethod: 'placeholder',
      placeholderText: '[REDACTED]',
    }),
  ],
})
```

> **Note:** See the [`SystemPromptScrubber()`](https://mastra.ai/reference/processors/system-prompt-scrubber) reference for a full list of configuration options.

> **Note:** When streaming responses over HTTP, Mastra redacts sensitive request data (system prompts, tool definitions, API keys) from stream chunks at the server level by default. See [Stream data redaction](https://mastra.ai/docs/server/mastra-server) for details.

## Hybrid processors

Hybrid processors can run on either input or output. Place them in `inputProcessors`, `outputProcessors`, or both.

### Moderate input and output

The `ModerationProcessor()` detects inappropriate or harmful content across categories like hate, harassment, and violence. It uses an LLM to classify the message and can block or rewrite it based on your configuration.

```typescript
import { Agent } from '@mastra/core/agent'
import { ModerationProcessor } from '@mastra/core/processors'

export const moderatedAgent = new Agent({
  id: 'moderated-agent',
  name: 'Moderated Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  inputProcessors: [
    new ModerationProcessor({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.7,
      strategy: 'block',
      categories: ['hate', 'harassment', 'violence'],
    }),
  ],
  outputProcessors: [
    new ModerationProcessor({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
    }),
  ],
})
```

> **Note:** See the [`ModerationProcessor()`](https://mastra.ai/reference/processors/moderation-processor) reference for a full list of configuration options.

### Detect and redact PII

The `PIIDetector()` detects and removes personally identifiable information such as email addresses, phone numbers, and credit card numbers. It uses an LLM to identify sensitive content based on configured detection types.

```typescript
import { Agent } from '@mastra/core/agent'
import { PIIDetector } from '@mastra/core/processors'

export const privateAgent = new Agent({
  id: 'private-agent',
  name: 'Private Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o-mini',
  inputProcessors: [
    new PIIDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
      threshold: 0.6,
      strategy: 'redact',
      redactionMethod: 'mask',
      detectionTypes: ['email', 'phone', 'credit-card'],
      instructions: 'Detect and mask personally identifiable information.',
    }),
  ],
  outputProcessors: [
    new PIIDetector({
      model: 'openrouter/openai/gpt-oss-safeguard-20b',
    }),
  ],
})
```

> **Note:** See the [`PIIDetector()`](https://mastra.ai/reference/processors/pii-detector) reference for a full list of configuration options.

## Processor strategies

Many built-in processors support a `strategy` parameter that controls how they handle flagged content. Supported values include `block`, `warn`, `detect`, `redact`, `rewrite`, and `translate`.

Most strategies allow the request to continue. When `block` is used, the processor calls `abort()`, which stops the request immediately and prevents subsequent processors from running.

```typescript
inputProcessors: [
  new PIIDetector({
    model: 'openrouter/openai/gpt-oss-safeguard-20b',
    threshold: 0.6,
    strategy: 'block',
    detectionTypes: ['email', 'phone', 'credit-card'],
  }),
]
```
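The split between `block` and the continue-style strategies can be sketched in plain TypeScript (a semantic illustration only, not Mastra's implementation; the function name is made up):

```typescript
type Strategy = 'block' | 'warn' | 'detect' | 'redact' | 'rewrite' | 'translate'

// Only 'block' aborts the request; every other strategy lets it continue,
// possibly with a warning attached or with the message transformed first.
function requestContinues(strategy: Strategy): boolean {
  return strategy !== 'block'
}
```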

## Handle blocked requests

When a processor calls `abort()`, the agent stops processing. How you detect this depends on whether you use `generate()` or `stream()`.

### With `generate()`

Check the `tripwire` field on the result:

```typescript
const result = await agent.generate('Is this credit card number valid?: 4543 1374 5089 4332')

if (result.tripwire) {
  console.error('Blocked:', result.tripwire.reason)
  console.error('Processor:', result.tripwire.processorId)
}
```

### With `stream()`

Listen for `tripwire` chunks in the stream:

```typescript
const stream = await agent.stream('Is this credit card number valid?: 4543 1374 5089 4332')

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'tripwire') {
    console.error('Blocked:', chunk.payload.reason)
    console.error('Processor:', chunk.payload.processorId)
  }
}
```
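Both surfaces expose the same `reason` and `processorId` fields, so a small helper can normalize logging across `generate()` and `stream()` (a hypothetical helper, not part of Mastra; the interface mirrors the payload shape shown above):

```typescript
// Shape of the tripwire payload as used in the examples above
interface TripwireInfo {
  reason: string
  processorId: string
}

// Build a single log-friendly message, or null when nothing was blocked
function formatBlockedMessage(tripwire: TripwireInfo | undefined): string | null {
  if (!tripwire) return null
  return `Blocked by ${tripwire.processorId}: ${tripwire.reason}`
}
```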

## Speed up guardrails

Guardrail processors that use an LLM (moderation, PII detection, prompt injection) add latency to every request. Three techniques reduce this overhead.

### Run guardrails in parallel

By default, processors run sequentially. Guardrails that only `block` (and never mutate messages) are independent and can run at the same time using a [workflow processor](https://mastra.ai/docs/agents/processors).

You can also mix `block` and `redact` strategies in a single parallel step. Map to the `redact` branch so its transformed messages carry forward.

For output guardrails, run `TokenLimiterProcessor` and `BatchPartsProcessor` sequentially _before_ the parallel step, and any `redact` processors that depend on each other sequentially _after_ it:

```typescript
import { createWorkflow, createStep } from '@mastra/core/workflows'
import {
  ProcessorStepSchema,
  PIIDetector,
  ModerationProcessor,
  SystemPromptScrubber,
  TokenLimiterProcessor,
  BatchPartsProcessor,
} from '@mastra/core/processors'

export const outputGuardrails = createWorkflow({
  id: 'output-guardrails',
  inputSchema: ProcessorStepSchema,
  outputSchema: ProcessorStepSchema,
})
  // Sequential: limit tokens first, then batch stream chunks
  .then(createStep(new TokenLimiterProcessor({ limit: 1000 })))
  .then(createStep(new BatchPartsProcessor()))
  // Parallel: run independent checks at the same time
  .parallel([
    createStep(
      new PIIDetector({
        model: 'openrouter/openai/gpt-oss-safeguard-20b',
        strategy: 'redact',
      }),
    ),
    createStep(
      new ModerationProcessor({
        model: 'openrouter/openai/gpt-oss-safeguard-20b',
        strategy: 'block',
      }),
    ),
  ])
  // Map to the redact branch to keep its transformed messages
  .map(async ({ inputData }) => {
    return inputData['processor:pii-detector']
  })
  // Sequential: scrubber depends on previous redaction output
  .then(
    createStep(
      new SystemPromptScrubber({
        model: 'openrouter/openai/gpt-oss-safeguard-20b',
        strategy: 'redact',
        placeholderText: '[REDACTED]',
      }),
    ),
  )
  .commit()
```

See [workflows as processors](https://mastra.ai/docs/agents/processors) for more details on `.parallel()` and `.map()`.

### Choose a fast model

Guardrail processors don't need your primary model. Use a small, fast model for classification tasks:

```typescript
const GUARDRAIL_MODEL = 'openai/gpt-5-nano'

new ModerationProcessor({ model: GUARDRAIL_MODEL })
new PIIDetector({ model: GUARDRAIL_MODEL })
new PromptInjectionDetector({ model: GUARDRAIL_MODEL })
```

### Batch stream parts

Output guardrails that implement `processOutputStream` run on every streamed chunk. Use `BatchPartsProcessor` _before_ heavier processors to combine chunks and reduce the number of LLM classification calls:

```typescript
outputProcessors: [
  new BatchPartsProcessor({ batchSize: 10 }),
  // Heavier processors now run on batched chunks instead of individual ones
  new PIIDetector({ model: GUARDRAIL_MODEL, strategy: 'redact' }),
  new ModerationProcessor({ model: GUARDRAIL_MODEL, strategy: 'block' }),
]
```
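As a rough back-of-the-envelope (not a Mastra API; it assumes every batch fills, so `maxWaitTime` flushes may add a few extra calls), the number of downstream classification calls shrinks linearly with the batch size:

```typescript
// Estimate classification calls for a stream after batching
function classificationCalls(totalChunks: number, batchSize: number): number {
  return Math.ceil(totalChunks / batchSize)
}

// A 200-chunk stream with batchSize 10 triggers ~20 calls instead of 200
```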

## Related

- [Processors](https://mastra.ai/docs/agents/processors): How processors work, execution order, custom processors, and retry mechanism
- [`Processor` Interface](https://mastra.ai/reference/processors/processor-interface): API reference for the `Processor` interface
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors): Processors for message history, semantic recall, and working memory