# MastraModelOutput

The `MastraModelOutput` class is returned by [.stream()](https://mastra.ai/reference/streaming/agents/stream) and provides both streaming and promise-based access to model outputs. It supports structured output generation, tool calls, reasoning, and comprehensive usage tracking.

```typescript
// MastraModelOutput is returned by agent.stream()
const stream = await agent.stream('Hello world')
```

For setup and basic usage, see the [.stream()](https://mastra.ai/reference/streaming/agents/stream) method documentation.

## Streaming properties

These properties provide real-time access to model outputs as they're generated:

**fullStream** (`ReadableStream<ChunkType<OUTPUT>>`): Complete stream of all chunk types including text, tool calls, reasoning, metadata, and control chunks. Provides granular access to every aspect of the model's response.

**fullStream.ChunkType** (`ChunkType<OUTPUT>`): All possible chunk types that can be emitted during streaming

**textStream** (`ReadableStream<string>`): Stream of incremental text content only. Filters out all metadata, tool calls, and control chunks to provide just the text being generated.

**objectStream** (`ReadableStream<Partial<OUTPUT>>`): Stream of progressive structured object updates when using output schemas. Emits partial objects as they're built up, allowing real-time visualization of structured data generation.

**objectStream.PartialSchemaOutput** (`Partial<OUTPUT>`): Partially completed object matching the defined schema

**elementStream** (`ReadableStream<OUTPUT extends (infer T)[] ? T : never>`): Stream of individual array elements when the output schema defines an array type. Each element is emitted as it's completed rather than waiting for the entire array.
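When the output schema is an array, consumption looks like the sketch below. Since a static example can't run a real model, a hand-built `ReadableStream` stands in for `elementStream`; the `Color` type and its values are illustrative, not part of the API.

```typescript
// Stand-in for stream.elementStream when the output schema is an array.
// With a real agent this would be: const stream = await agent.stream(...)
type Color = { name: string }

const elementStream = new ReadableStream<Color>({
  start(controller) {
    controller.enqueue({ name: 'red' })
    controller.enqueue({ name: 'green' })
    controller.close()
  },
})

// Each completed element is handled as soon as it arrives,
// rather than waiting for the whole array
const elements: Color[] = []
const reader = elementStream.getReader()
while (true) {
  const { done, value } = await reader.read()
  if (done) break
  elements.push(value)
}
console.log(elements) // [{ name: 'red' }, { name: 'green' }]
```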

## Promise-based properties

These properties resolve to final values after the stream completes:

**text** (`Promise<string>`): The complete concatenated text response from the model. Resolves when text generation is finished.

**object** (`Promise<OUTPUT>`): The complete structured object response when using output schemas. Validated against the schema before resolving. Rejects if validation fails.

**object.InferSchemaOutput** (`OUTPUT`): Fully typed object matching the exact schema definition

**reasoning** (`Promise<string>`): Complete reasoning text for models that support reasoning (like OpenAI's o1 series). Resolves to an empty string for models without reasoning capability.

**reasoningText** (`Promise<string | undefined>`): Alternative access to reasoning content. May be `undefined` for models that don't support reasoning, whereas `reasoning` resolves to an empty string.

**toolCalls** (`Promise<ToolCallChunk[]>`): Array of all tool call chunks made during execution. Each chunk contains tool metadata and execution details.

**toolCalls.type** (`'tool-call'`): Chunk type identifier

**toolCalls.runId** (`string`): Execution run identifier

**toolCalls.from** (`ChunkFrom`): Source of the chunk (AGENT, WORKFLOW, etc.)

**toolCalls.payload** (`ToolCallPayload`): Tool call data including toolCallId, toolName, args, and execution details

**toolResults** (`Promise<ToolResultChunk[]>`): Array of all tool result chunks corresponding to the tool calls. Contains execution results and error information.

**toolResults.type** (`'tool-result'`): Chunk type identifier

**toolResults.runId** (`string`): Execution run identifier

**toolResults.from** (`ChunkFrom`): Source of the chunk (AGENT, WORKFLOW, etc.)

**toolResults.payload** (`ToolResultPayload`): Tool result data including toolCallId, toolName, result, and error status

**usage** (`Promise<LanguageModelUsage>`): Token usage statistics including input tokens, output tokens, total tokens, and reasoning tokens (for reasoning models).

**usage.inputTokens** (`number`): Tokens consumed by the input prompt

**usage.outputTokens** (`number`): Tokens generated in the response

**usage.totalTokens** (`number`): Sum of input and output tokens

**usage.reasoningTokens** (`number`): Hidden reasoning tokens (for reasoning models)

**usage.cachedInputTokens** (`number`): Number of input tokens that were a cache hit
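These fields can be condensed into a one-line summary. The sketch below assumes the `LanguageModelUsage` shape documented above; the `summarizeUsage` helper and the literal token counts are illustrative (normally the value comes from `await stream.usage`).

```typescript
// Mirrors the documented usage fields; optional fields may be absent
// for models without reasoning or caching support
interface LanguageModelUsage {
  inputTokens: number
  outputTokens: number
  totalTokens: number
  reasoningTokens?: number
  cachedInputTokens?: number
}

function summarizeUsage(usage: LanguageModelUsage): string {
  const cached = usage.cachedInputTokens ?? 0
  return `${usage.inputTokens} in + ${usage.outputTokens} out = ${usage.totalTokens} total (${cached} cached)`
}

// Illustrative values; in practice: const usage = await stream.usage
const usage: LanguageModelUsage = {
  inputTokens: 120,
  outputTokens: 48,
  totalTokens: 168,
  cachedInputTokens: 64,
}
console.log(summarizeUsage(usage)) // "120 in + 48 out = 168 total (64 cached)"
```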

**finishReason** (`Promise<string | undefined>`): Reason why generation stopped (e.g. `'stop'`, `'length'`, `'tool_calls'`, `'content_filter'`). Undefined if the stream hasn't finished.

**finishReason.stop** (`'stop'`): Model finished naturally

**finishReason.length** (`'length'`): Hit maximum token limit

**finishReason.tool\_calls** (`'tool_calls'`): Model called tools

**finishReason.content\_filter** (`'content_filter'`): Content was filtered
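A typical way to react to these values is a small switch over the resolved reason. The `describeFinish` helper below is an illustrative sketch, not part of the API; `undefined` means the stream hasn't finished yet.

```typescript
// Map the documented finishReason values to human-readable descriptions.
// In practice the input comes from: const reason = await stream.finishReason
function describeFinish(reason: string | undefined): string {
  switch (reason) {
    case 'stop':
      return 'model finished naturally'
    case 'length':
      return 'hit maximum token limit'
    case 'tool_calls':
      return 'model called tools'
    case 'content_filter':
      return 'content was filtered'
    case undefined:
      return 'stream not finished'
    default:
      return `unknown reason: ${reason}`
  }
}

console.log(describeFinish('length')) // "hit maximum token limit"
console.log(describeFinish(undefined)) // "stream not finished"
```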

**response** (`Promise<Response>`): Response metadata and messages from the model provider.

**response.id** (`string`): Response ID from the model provider

**response.timestamp** (`Date`): Response timestamp

**response.modelId** (`string`): Model identifier used for this response

**response.headers** (`Record<string, string>`): Response headers from the model provider

**response.messages** (`ResponseMessage[]`): Response messages in model format

**response.uiMessages** (`UIMessage[]`): Response messages in UI format, includes any metadata added by output processors
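The sketch below shows one way to log this metadata. The `ResponseMetadata` interface mirrors the fields documented above; the `formatResponseMeta` helper and the literal values are illustrative (a real value comes from `await stream.response`).

```typescript
// Subset of the documented response fields used for logging
interface ResponseMetadata {
  id: string
  timestamp: Date
  modelId: string
  headers?: Record<string, string>
}

function formatResponseMeta(res: ResponseMetadata): string {
  return `${res.modelId} responded (${res.id}) at ${res.timestamp.toISOString()}`
}

// Illustrative values; in practice: const res = await stream.response
const meta: ResponseMetadata = {
  id: 'resp_123',
  timestamp: new Date('2024-01-01T00:00:00Z'),
  modelId: 'gpt-4o',
}
console.log(formatResponseMeta(meta))
// "gpt-4o responded (resp_123) at 2024-01-01T00:00:00.000Z"
```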

## Error properties

**error** (`string | Error | { message: string; stack: string; } | undefined`): Error information if the stream encountered an error. Undefined if no errors occurred. Can be a string message, Error object, or serialized error with stack trace.
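Because the error can take three shapes, it is convenient to normalize it before logging. The `errorMessage` helper below is an illustrative sketch over the documented union, not part of the API.

```typescript
// The documented shape of MastraModelOutput's error property
type StreamError = string | Error | { message: string; stack: string } | undefined

// Collapse the union into a printable message (undefined if no error)
function errorMessage(err: StreamError): string | undefined {
  if (err === undefined) return undefined
  if (typeof err === 'string') return err
  return err.message // Error and the serialized shape both expose .message
}

console.log(errorMessage(new Error('rate limited'))) // "rate limited"
console.log(errorMessage(undefined)) // undefined
```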

## Methods

**getFullOutput** (`() => Promise<FullOutput>`): Returns a comprehensive output object containing all results: text, structured object, tool calls, usage statistics, reasoning, and metadata. Convenient single method to access all stream results.

**getFullOutput.text** (`string`): Complete text response

**getFullOutput.object** (`OUTPUT`): Structured output if schema was provided

**getFullOutput.toolCalls** (`ToolCallChunk[]`): All tool call chunks made

**getFullOutput.toolResults** (`ToolResultChunk[]`): All tool result chunks

**getFullOutput.usage** (`Record<string, number>`): Token usage statistics

**getFullOutput.reasoning** (`string`): Reasoning text if available

**getFullOutput.finishReason** (`string`): Why generation finished

**getFullOutput.response** (`Response`): Response metadata and messages from the model provider

**consumeStream** (`(options?: ConsumeStreamOptions) => Promise<void>`): Manually consume the entire stream without processing chunks. Useful when you only need the final promise-based results and want to trigger stream consumption.

**consumeStream.onError** (`(error: Error) => void`): Callback for handling stream errors

## Usage examples

### Basic Text Streaming

```typescript
const stream = await agent.stream('Write a haiku')

// Stream text as it's generated
for await (const text of stream.textStream) {
  process.stdout.write(text)
}

// Or get the complete text
const fullText = await stream.text
console.log(fullText)
```

### Structured Output Streaming

```typescript
import { z } from 'zod'

const stream = await agent.stream('Generate user data', {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
      age: z.number(),
      email: z.string(),
    }),
  },
})

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log('Progress:', partial) // { name: "John" }, { name: "John", age: 30 }, ...
}

// Get final validated object
const user = await stream.object
console.log('Final:', user) // { name: "John", age: 30, email: "john@example.com" }
```

### Tool Calls and Results

```typescript
const stream = await agent.stream("What's the weather in NYC?", {
  tools: { weather: weatherTool },
})

// Monitor tool calls
const toolCalls = await stream.toolCalls
const toolResults = await stream.toolResults

console.log('Tools called:', toolCalls)
console.log('Results:', toolResults)
```

### Complete Output Access

```typescript
const stream = await agent.stream('Analyze this data')

const output = await stream.getFullOutput()
console.log({
  text: output.text,
  usage: output.usage,
  reasoning: output.reasoning,
  finishReason: output.finishReason,
})
```

### Full Stream Processing

```typescript
const stream = await agent.stream('Complex task')

for await (const chunk of stream.fullStream) {
  switch (chunk.type) {
    case 'text-delta':
      process.stdout.write(chunk.payload.text)
      break
    case 'tool-call':
      console.log(`Calling ${chunk.payload.toolName}...`)
      break
    case 'reasoning-delta':
      console.log(`Reasoning: ${chunk.payload.text}`)
      break
    case 'finish': {
      console.log(`Done! Reason: ${chunk.payload.stepResult.reason}`)
      // Access response messages with any metadata added by output processors
      const uiMessages = chunk.payload.response?.uiMessages
      if (uiMessages) {
        console.log('Response messages:', uiMessages)
      }
      break
    }
  }
}
```

### Error handling

```typescript
const stream = await agent.stream('Analyze this data')

try {
  // Option 1: Handle errors in consumeStream
  await stream.consumeStream({
    onError: error => {
      console.error('Stream error:', error)
    },
  })

  const result = await stream.text
} catch (error) {
  console.error('Failed to get result:', error)
}

// Option 2: Check error property
const result = await stream.getFullOutput()
if (stream.error) {
  console.error('Stream had errors:', stream.error)
}
```

## Related types

- [.stream()](https://mastra.ai/reference/streaming/agents/stream): Method that returns MastraModelOutput
- [ChunkType](https://mastra.ai/reference/streaming/ChunkType): All possible chunk types in the full stream