# Processors

Processors transform, validate, or control messages as they pass through an agent. They run at specific points in the agent's execution pipeline, allowing you to modify inputs before they reach the language model or outputs before they're returned to users.

Processors are configured as:

- **`inputProcessors`**: Run before messages reach the language model.
- **`outputProcessors`**: Run after the language model generates a response, but before it's returned to users.

You can use individual [`Processor`](https://mastra.ai/reference/processors/processor-interface) objects or compose them into workflows using Mastra's workflow primitives. Workflows give you advanced control over processor execution order, parallel processing, and conditional logic.

Some processors implement both input and output logic and can be used in either array depending on where the transformation should occur.

Some built-in processors also persist hidden system reminder messages, marked with `<system-reminder>...</system-reminder>` text and `metadata.systemReminder`. These reminders stay available in raw memory history and retry/prompt reconstruction paths, but standard UI-facing message conversions and default memory recall hide them unless you explicitly opt in.

## When to use processors

Use processors to:

- Normalize or validate user input
- Add guardrails to your agent
- Detect and prevent prompt injection or jailbreak attempts
- Moderate content for safety or compliance
- Transform messages (e.g., translate languages, filter tool calls)
- Limit token usage or message history length
- Redact sensitive information (PII)
- Apply custom business logic to messages

Mastra includes several processors for common use cases. You can also create custom processors for application-specific requirements.

## Quickstart

Import and instantiate the processor, then pass it to the agent's `inputProcessors` or `outputProcessors` array:

```typescript
import { Agent } from '@mastra/core/agent'
import { ModerationProcessor } from '@mastra/core/processors'

export const moderatedAgent = new Agent({
  name: 'moderated-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-5-mini',
  inputProcessors: [
    new ModerationProcessor({
      model: 'openai/gpt-5-mini',
      categories: ['hate', 'harassment', 'violence'],
      threshold: 0.7,
      strategy: 'block',
    }),
  ],
})
```

## Execution order

Processors run in the order they appear in the array:

```typescript
inputProcessors: [new UnicodeNormalizer(), new PromptInjectionDetector(), new ModerationProcessor()]
```

For output processors, the order determines the sequence of transformations applied to the model's response.

### With memory enabled

When memory is enabled on an agent, memory processors are automatically added to the pipeline:

**Input processors:**

```text
[Memory Processors] → [Your inputProcessors]
```

Memory loads message history first, then your processors run.

**Output processors:**

```text
[Your outputProcessors] → [Memory Processors]
```

Your processors run first, then memory persists messages.

This ordering ensures that if your output guardrail calls `abort()`, memory processors are skipped and no messages are saved. See [Memory Processors](https://mastra.ai/docs/memory/memory-processors) for details.

## Create custom processors

Custom processors implement the `Processor` interface:

### Transform input messages

```typescript
import type { Processor, ProcessInputArgs } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'

export class CustomInputProcessor implements Processor {
  id = 'custom-input'

  async processInput({ messages }: ProcessInputArgs): Promise<MastraDBMessage[]> {
    // Transform messages before they reach the LLM
    return messages.map(msg => ({
      ...msg,
      content: {
        ...msg.content,
        content: msg.content.content.toLowerCase(),
      },
    }))
  }
}
```

The `processInput()` method receives `messages`, `systemMessages`, and an `abort()` function. Return a `MastraDBMessage[]` to replace messages, or `{ messages, systemMessages }` to also modify system messages.

See the [`Processor` reference](https://mastra.ai/reference/processors/processor-interface) for all available arguments and return types.

### Control each step

While `processInput()` runs once at the start of agent execution, `processInputStep()` runs at **each step** of the agentic loop (including tool call continuations). This enables per-step configuration changes like dynamic model switching or tool choice modifications.

```typescript
import type {
  Processor,
  ProcessInputStepArgs,
  ProcessInputStepResult,
} from '@mastra/core/processors'

export class DynamicModelProcessor implements Processor {
  id = 'dynamic-model'

  async processInputStep({
    stepNumber,
    model,
    toolChoice,
    messageList,
  }: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
    // Use a fast model for initial response
    if (stepNumber === 0) {
      return { model: 'openai/gpt-5-mini' }
    }

    // Disable tools after 5 steps to force completion
    if (stepNumber > 5) {
      return { toolChoice: 'none' }
    }

    // No changes for other steps
    return {}
  }
}
```

The method receives the current `stepNumber`, `model`, `tools`, `toolChoice`, `messages`, and more. Return an object with any properties you want to override for that step, for example `{ model, toolChoice, tools, systemMessages }`.

See the [`Processor` reference](https://mastra.ai/reference/processors/processor-interface) for all available arguments and return types.

### Use the `prepareStep()` callback

The `prepareStep()` callback on `generate()` or `stream()` is a shorthand for `processInputStep()`. Internally, Mastra wraps it in a processor that calls your function at each step. It accepts the same arguments and return type as `processInputStep()`, but doesn't require creating a class:

```typescript
await agent.generate('Complex task', {
  prepareStep: async ({ stepNumber, model }) => {
    if (stepNumber === 0) {
      return { model: 'openai/gpt-5-mini' }
    }
    if (stepNumber > 5) {
      return { toolChoice: 'none' }
    }
  },
})
```

### Transform output messages

```typescript
import type { Processor } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'

export class CustomOutputProcessor implements Processor {
  id = 'custom-output'

  async processOutputResult({ messages }): Promise<MastraDBMessage[]> {
    // Transform messages after the LLM generates them
    return messages.filter(msg => msg.role !== 'system')
  }
}
```

The method also receives a `result` object with the full generation data — `text`, `usage` (token counts), `finishReason`, and `steps` (each containing `toolCalls`, `toolResults`, etc.). Use it to track usage or inspect tool calls:

```typescript
import type { Processor } from '@mastra/core/processors'

export class UsageTracker implements Processor {
  id = 'usage-tracker'

  async processOutputResult({ messages, result }) {
    console.log(`Tokens: ${result.usage.inputTokens} in, ${result.usage.outputTokens} out`)
    console.log(`Finish reason: ${result.finishReason}`)
    return messages
  }
}
```

### Filter streamed output

The `processOutputStream()` method transforms or filters streaming chunks before they reach the client:

```typescript
import type { Processor } from '@mastra/core/processors'
import type { ChunkType } from '@mastra/core/stream'

export class StreamFilter implements Processor {
  id = 'stream-filter'

  async processOutputStream({ part }): Promise<ChunkType | null> {
    // Transform or filter streaming chunks
    return part
  }
}
```

To also receive custom `data-*` chunks emitted by tools via `writer.custom()`, set `processDataParts = true` on your processor. This lets you inspect, modify, or block tool-emitted data chunks before they reach the client.

### Validate each response

The `processOutputStep()` method runs after each LLM step, allowing you to validate the response and optionally request a retry:

```typescript
import type { Processor } from '@mastra/core/processors'

export class ResponseValidator implements Processor {
  id = 'response-validator'

  async processOutputStep({ text, abort, retryCount }) {
    const isValid = await validateResponse(text)

    if (!isValid && retryCount < 3) {
      abort('Response did not meet requirements. Try again.', { retry: true })
    }

    return []
  }
}
```

For more on retry behavior, see [Retry mechanism](#retry-mechanism) in Advanced patterns.

## Built-in utility processors

Mastra provides utility processors for common tasks:

**For security and validation processors**, see the [Guardrails](https://mastra.ai/docs/agents/guardrails) page for input/output guardrails and moderation processors. **For memory-specific processors**, see the [Memory Processors](https://mastra.ai/docs/memory/memory-processors) page for processors that handle message history, semantic recall, and working memory.

### `TokenLimiter`

Prevents context window overflow by removing older messages when the total token count exceeds a specified limit. Prioritizes recent messages and preserves system messages.

```typescript
import { Agent } from '@mastra/core/agent'
import { TokenLimiter } from '@mastra/core/processors'

const agent = new Agent({
  name: 'my-agent',
  model: 'openai/gpt-5.4',
  inputProcessors: [new TokenLimiter(127000)],
})
```

See the [`TokenLimiterProcessor` reference](https://mastra.ai/reference/processors/token-limiter-processor) for custom encoding, strategy, and count mode options.

### `ToolCallFilter`

Removes tool calls and results from messages sent to the LLM, saving tokens on verbose tool interactions. Optionally exclude only specific tools. This filter only affects the LLM input; filtered messages are still saved to memory.
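
For example, assuming the `exclude` option described in the reference, you might filter out only one verbose tool (the tool name is illustrative):

```typescript
import { Agent } from '@mastra/core/agent'
import { ToolCallFilter } from '@mastra/core/processors'

const agent = new Agent({
  name: 'my-agent',
  model: 'openai/gpt-5.4',
  inputProcessors: [
    // Remove calls/results for the listed tools only;
    // with no arguments, all tool calls are removed
    new ToolCallFilter({ exclude: ['imageGenTool'] }),
  ],
})
```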

See the [`ToolCallFilter` reference](https://mastra.ai/reference/processors/tool-call-filter) for configuration options and the [Memory Processors](https://mastra.ai/docs/memory/memory-processors) page for pre-memory filtering.

### `ToolSearchProcessor`

Enables dynamic tool discovery for agents with large tool libraries. Instead of providing all tools upfront, the processor gives the agent `search_tools` and `load_tool` meta-tools to find and load tools by keyword on demand, reducing context token usage.

See the [`ToolSearchProcessor` reference](https://mastra.ai/reference/processors/tool-search-processor) for configuration options and usage examples.

## Advanced patterns

### Ensure a final response with `maxSteps`

When using `maxSteps` to limit agent execution, the agent may return an empty response if it attempts a tool call on the final step. Use `processInputStep()` to force a text response on the last step:

```typescript
import type {
  Processor,
  ProcessInputStepArgs,
  ProcessInputStepResult,
} from '@mastra/core/processors'

export class EnsureFinalResponseProcessor implements Processor {
  readonly id = 'ensure-final-response'

  private maxSteps: number

  constructor(maxSteps: number) {
    this.maxSteps = maxSteps
  }

  async processInputStep({
    stepNumber,
    systemMessages,
  }: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
    // On the last step, prevent tool calls and instruct the LLM to summarize
    if (stepNumber === this.maxSteps - 1) {
      return {
        tools: {},
        toolChoice: 'none',
        systemMessages: [
          ...systemMessages,
          {
            role: 'system',
            content:
              'You have reached the maximum number of steps. Summarize your progress so far and provide a best-effort response. If the task is incomplete, clearly indicate what remains to be done.',
          },
        ],
      }
    }
    return {}
  }
}
```

Add it to `inputProcessors` and pass the same `maxSteps` value to `generate()` or `stream()`:

```typescript
const MAX_STEPS = 5

const agent = new Agent({
  inputProcessors: [new EnsureFinalResponseProcessor(MAX_STEPS)],
  // ...
})

await agent.generate('Your prompt', { maxSteps: MAX_STEPS })
```

### Emit custom stream events

Output processors receive a `writer` object that lets you emit custom data chunks back to the client during streaming. This is useful for streaming moderation results or sending UI update signals without blocking the original stream.

```typescript
import type { Processor } from '@mastra/core/processors'

export class ModerationProcessor implements Processor {
  id = 'moderation'

  async processOutputResult({ messages, writer }) {
    // Run moderation on the final output
    const text = messages
      .filter(m => m.role === 'assistant')
      .flatMap(m => m.content.parts?.filter(p => p.type === 'text'))
      .map(p => p.text)
      .join(' ')

    const result = await runModeration(text)

    if (result.requiresChange) {
      // Emit a custom event to the client with the moderated text
      await writer?.custom({
        type: 'data-moderation-update',
        data: {
          originalText: text,
          moderatedText: result.moderatedText,
          reason: result.reason,
        },
      })
    }

    return messages
  }
}
```

On the client, listen for the custom chunk type in the stream:

```typescript
const stream = await agent.stream('Hello')

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'data-moderation-update') {
    // Update the UI with moderated text
    updateDisplayedMessage(chunk.data.moderatedText)
  }
}
```

Custom chunk types must use the `data-` prefix (e.g., `data-moderation-update`, `data-status`).

### Add metadata to messages

You can add custom metadata to messages in `processOutputResult`. This metadata is accessible via the response object:

```typescript
import type { Processor } from '@mastra/core/processors'
import type { MastraDBMessage } from '@mastra/core/memory'

export class MetadataProcessor implements Processor {
  id = 'metadata-processor'

  async processOutputResult({
    messages,
  }: {
    messages: MastraDBMessage[]
  }): Promise<MastraDBMessage[]> {
    return messages.map(msg => {
      if (msg.role === 'assistant') {
        return {
          ...msg,
          content: {
            ...msg.content,
            metadata: {
              ...msg.content.metadata,
              processedAt: new Date().toISOString(),
              customData: 'your data here',
            },
          },
        }
      }
      return msg
    })
  }
}
```

Access the metadata with `generate()`:

```typescript
const result = await agent.generate('Hello')

// The response includes uiMessages with processor-added metadata
const assistantMessage = result.response?.uiMessages?.find(m => m.role === 'assistant')
console.log(assistantMessage?.metadata?.customData)
```

For streaming, access metadata from the `finish` chunk payload or the `stream.response` promise.
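
A minimal sketch of the `stream.response` approach, assuming it resolves to the same shape as `result.response` from `generate()`:

```typescript
const stream = await agent.stream('Hello')

// Drain the stream first
for await (const chunk of stream.fullStream) {
  // render chunks as usual
}

// Then read the processed messages from the resolved response
const response = await stream.response
const assistantMessage = response?.uiMessages?.find(m => m.role === 'assistant')
console.log(assistantMessage?.metadata?.customData)
```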

### Use workflows as processors

You can use Mastra workflows as processors to create complex processing pipelines with parallel execution, conditional branching, and error handling:

```typescript
import { createWorkflow, createStep } from '@mastra/core/workflows'
import {
  ProcessorStepSchema,
  PromptInjectionDetector,
  PIIDetector,
  ModerationProcessor,
} from '@mastra/core/processors'
import { Agent } from '@mastra/core/agent'

// Create a workflow that runs multiple checks in parallel
const moderationWorkflow = createWorkflow({
  id: 'moderation-pipeline',
  inputSchema: ProcessorStepSchema,
  outputSchema: ProcessorStepSchema,
})
  .parallel([
    createStep(
      new PIIDetector({
        strategy: 'redact',
      }),
    ),
    createStep(
      new PromptInjectionDetector({
        strategy: 'block',
      }),
    ),
    createStep(
      new ModerationProcessor({
        strategy: 'block',
      }),
    ),
  ])
  .map(async ({ inputData }) => {
    return inputData['processor:pii-detector']
  })
  .commit()

// Use the workflow as an input processor
const agent = new Agent({
  id: 'moderated-agent',
  name: 'Moderated Agent',
  model: 'openai/gpt-5.4',
  inputProcessors: [moderationWorkflow],
})
```

After a `.parallel()` step, each branch result is keyed by its processor ID (e.g. `processor:pii-detector`). Use `.map()` to select the branch whose output the next step should receive.

If a branch uses a mutating strategy like `redact`, map to that branch so its transformed messages carry forward. If all branches only `block`, none of them modify the messages, so any branch works.

When an agent is registered with Mastra, processor workflows are automatically registered as workflows, allowing you to view and debug them in the [Studio](https://mastra.ai/docs/studio/overview).

### Retry mechanism

Processors can request that the LLM retry its response with feedback. This is useful for implementing quality checks, output validation, or iterative refinement:

```typescript
import type { Processor } from '@mastra/core/processors'

export class QualityChecker implements Processor {
  id = 'quality-checker'

  async processOutputStep({ text, abort, retryCount }) {
    const qualityScore = await evaluateQuality(text)

    if (qualityScore < 0.7 && retryCount < 3) {
      // Request a retry with feedback for the LLM
      abort('Response quality score too low. Please provide a more detailed answer.', {
        retry: true,
        metadata: { score: qualityScore },
      })
    }

    return []
  }
}

const agent = new Agent({
  id: 'quality-agent',
  name: 'Quality Agent',
  model: 'openai/gpt-5.4',
  outputProcessors: [new QualityChecker()],
  maxProcessorRetries: 3, // Maximum retry attempts (default: 3)
})
```

The retry mechanism:

- Only works in `processOutputStep()` and `processInputStep()` methods
- Replays the step with the abort reason added as context for the LLM
- Tracks retry count via the `retryCount` parameter
- Respects `maxProcessorRetries` limit on the agent

## API error handling

The `processAPIError()` method handles LLM API rejections: errors where the API rejects the request (such as 400 or 422 status codes) rather than network or server failures. This lets you modify the request and retry when the API rejects the message format.

```typescript
import { APICallError } from '@ai-sdk/provider'
import type { Processor, ProcessAPIErrorArgs, ProcessAPIErrorResult } from '@mastra/core/processors'

export class ContextLengthHandler implements Processor {
  id = 'context-length-handler'

  processAPIError({
    error,
    messageList,
    retryCount,
  }: ProcessAPIErrorArgs): ProcessAPIErrorResult | void {
    if (retryCount > 0) return

    if (APICallError.isInstance(error) && error.message.includes('context length exceeded')) {
      const messages = messageList.get.all.db()
      if (messages.length > 4) {
        messageList.removeByIds([messages[1]!.id, messages[2]!.id])
        return { retry: true }
      }
    }
  }
}
```

Mastra includes a built-in [`PrefillErrorHandler`](https://mastra.ai/reference/processors/prefill-error-handler) that automatically handles the Anthropic "assistant message prefill" error. This processor is auto-injected and requires no configuration.

## Related documentation

- [Guardrails](https://mastra.ai/docs/agents/guardrails): Security and validation processors
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors): Memory-specific processors and automatic integration
- [Processor Interface](https://mastra.ai/reference/processors/processor-interface): Full API reference for processors
- [ToolSearchProcessor Reference](https://mastra.ai/reference/processors/tool-search-processor): API reference for dynamic tool search