# Agent.network()

The `.network()` method enables multi-agent collaboration and routing. It accepts messages and optional execution options.

> **Deprecated:** The `.network()` primitive has been deprecated and will be removed in a future major release. Use [supervisor agents](https://mastra.ai/docs/agents/supervisor-agents) with `agent.stream()` or `agent.generate()` instead. See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.

## Usage example

```typescript
import { Agent } from '@mastra/core/agent'
import { agent1, agent2 } from './agents'
import { workflow1 } from './workflows'
import { tool1, tool2 } from './tools'

const agent = new Agent({
  id: 'network-agent',
  name: 'Network Agent',
  instructions: 'You are a network agent that can help users with a variety of tasks.',
  model: 'openai/gpt-5.4',
  agents: {
    agent1,
    agent2,
  },
  workflows: {
    workflow1,
  },
  tools: {
    tool1,
    tool2,
  },
})

await agent.network(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`)
```
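Because `.network()` is deprecated, the same configuration can instead drive the supervisor-agent pattern. As a rough sketch (see the migration guide for the authoritative steps; the exact `agent.stream()` call shape here is an assumption):

```typescript
// The agent above already registers sub-agents, workflows, and tools,
// which is the same shape a supervisor agent uses. Instead of
// .network(), call .stream() (or .generate()) on the supervisor.
const stream = await agent.stream(`
  Find me the weather in Tokyo.
  Based on the weather, plan an activity for me.
`)

for await (const chunk of stream) {
  // Handle streaming chunks
}
```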

## Parameters

**messages** (`string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]`): The messages to send to the agent. Can be a single string, an array of strings, or structured message objects.

**options** (`MultiPrimitiveExecutionOptions`): Optional configuration for the network process.

**options.maxSteps** (`number`): Maximum number of steps to run during execution.

**options.abortSignal** (`AbortSignal`): Signal to abort the network execution. When aborted, the network stops routing, cancels any in-progress sub-agent, tool, or workflow execution, and skips saving partial results to memory.

**options.onAbort** (`(event: { primitiveType: string; primitiveId: string; iteration: number }) => void | Promise<void>`): Callback fired when the network is aborted. Receives an event with the type and ID of the primitive that was executing when the abort occurred.

**options.memory** (`object`): Configuration for memory. This is the preferred way to manage memory.

**options.memory.thread** (`string | { id: string; metadata?: Record<string, any>, title?: string }`): The conversation thread, as a string ID or an object with an `id` and optional `metadata` and `title`.

**options.memory.resource** (`string`): Identifier for the user or resource associated with the thread.

**options.memory.options** (`MemoryConfig`): Configuration for memory behavior, like message history and semantic recall.
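For example, to persist the run to a conversation thread (the IDs and metadata below are illustrative):

```typescript
const stream = await agent.network('Plan a weekend trip to Kyoto', {
  memory: {
    // A plain string ID also works: thread: 'trip-planning-123'
    thread: { id: 'trip-planning-123', metadata: { topic: 'travel' } },
    // The user or resource this thread belongs to
    resource: 'user-456',
  },
})
```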

**options.tracingContext** (`TracingContext`): Tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.

**options.tracingContext.currentSpan** (`Span`): Current span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.

**options.tracingOptions** (`TracingOptions`): Options for Tracing configuration.

**options.tracingOptions.metadata** (`Record<string, any>`): Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.

**options.tracingOptions.requestContextKeys** (`string[]`): Additional RequestContext keys to extract as metadata for this trace. Supports dot notation for nested values (e.g., 'user.id').

**options.tracingOptions.traceId** (`string`): Trace ID to use for this execution (1-32 hexadecimal characters). If provided, this trace will be part of the specified trace.

**options.tracingOptions.parentSpanId** (`string`): Parent span ID to use for this execution (1-16 hexadecimal characters). If provided, the root span will be created as a child of this span.

**options.tracingOptions.tags** (`string[]`): Tags to apply to this trace. String labels for categorizing and filtering traces.
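For example, to attach custom metadata and tags to the root trace span (the values are illustrative):

```typescript
const stream = await agent.network('Summarize recent sales data', {
  tracingOptions: {
    // Custom attributes added to the root trace span
    metadata: { userId: 'user-456', feature: 'sales-summary' },
    // Extract nested RequestContext values as trace metadata
    requestContextKeys: ['user.id'],
    // Labels for filtering traces later
    tags: ['network', 'reporting'],
  },
})
```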

**options.telemetry** (`TelemetrySettings`): Settings for OTLP telemetry collection during streaming (not Tracing).

**options.telemetry.isEnabled** (`boolean`): Enable or disable telemetry. Disabled by default while experimental.

**options.telemetry.recordInputs** (`boolean`): Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.

**options.telemetry.recordOutputs** (`boolean`): Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.

**options.telemetry.functionId** (`string`): Identifier for this function. Used to group telemetry data by function.

**options.modelSettings** (`CallSettings`): Model-specific settings like temperature, maxOutputTokens, topP, etc. These settings control how the language model generates responses.

**options.modelSettings.temperature** (`number`): Controls randomness in generation (0-2). Higher values make output more random.

**options.modelSettings.maxOutputTokens** (`number`): Maximum number of tokens to generate in the response. Note: Use maxOutputTokens (not maxTokens) as per AI SDK v5 convention.

**options.modelSettings.maxRetries** (`number`): Maximum number of retry attempts for failed requests.

**options.modelSettings.topP** (`number`): Nucleus sampling parameter (0-1). Controls diversity of generated text.

**options.modelSettings.topK** (`number`): Top-k sampling parameter. Limits vocabulary to k most likely tokens.

**options.modelSettings.presencePenalty** (`number`): Penalty for token presence (-2 to 2). Reduces repetition.

**options.modelSettings.frequencyPenalty** (`number`): Penalty for token frequency (-2 to 2). Reduces repetition of frequent tokens.

**options.modelSettings.stopSequences** (`string[]`): Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
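For example (the values are illustrative):

```typescript
const stream = await agent.network('Draft a product announcement', {
  modelSettings: {
    temperature: 0.3, // lower randomness for more consistent output
    maxOutputTokens: 1024, // note: maxOutputTokens, not maxTokens
    topP: 0.9, // nucleus sampling
    maxRetries: 2, // retry failed requests up to twice
  },
})
```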

**options.structuredOutput** (`StructuredOutputOptions`): Configuration for generating a typed structured output from the network result.

**options.structuredOutput.schema** (`ZodSchema | JSONSchema7`): The schema to validate the output against. Can be a Zod schema or JSON Schema.

**options.structuredOutput.model** (`MastraModelConfig`): Model to use for generating the structured output. Defaults to the agent's model.

**options.structuredOutput.instructions** (`string`): Custom instructions for generating the structured output.

**options.runId** (`string`): Unique ID for this generation run. Useful for tracking and debugging purposes.

**options.requestContext** (`RequestContext`): Request Context for dependency injection and contextual information.

**options.traceId** (`string`): The trace ID associated with this execution when Tracing is enabled. Use this to correlate logs and debug execution flow.

**options.spanId** (`string`): The root span ID associated with this execution when Tracing is enabled. Use this for span-level lookup and correlation.

**options.onStepFinish** (`(event: any) => Promise<void> | void`): Callback fired after each LLM step within a sub-agent execution. Receives step details including finish reason and token usage.

**options.onError** (`({ error }: { error: Error | string }) => Promise<void> | void`): Callback fired when an error occurs during sub-agent execution.
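For example, wiring up both callbacks (the logged payload shapes are untyped here, as the event type is `any`):

```typescript
const stream = await agent.network('Research our competitors', {
  onStepFinish: async (event) => {
    // Fired after each LLM step within a sub-agent execution
    console.log('step finished:', event)
  },
  onError: ({ error }) => {
    console.error('network error:', error)
  },
})
```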

## Returns

**stream** (`MastraAgentNetworkStream<NetworkChunkType>`): A custom stream that extends `ReadableStream<NetworkChunkType>` with additional network-specific properties.

**status** (`Promise<RunStatus>`): A promise that resolves to the current workflow run status.

**result** (`Promise<WorkflowResult<TState, TOutput, TSteps>>`): A promise that resolves to the final workflow result.

**usage** (`Promise<{ promptTokens: number; completionTokens: number; totalTokens: number }>`): A promise that resolves to token usage statistics.

**object** (`Promise<OUTPUT | undefined>`): A promise that resolves to the structured output object. Only available when structuredOutput option is provided. Resolves to undefined if no schema was specified.

**objectStream** (`ReadableStream<Partial<OUTPUT>>`): A stream of partial objects during structured output generation. Useful for streaming partial results as they're being generated.
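For example, after draining the stream you can await the run-level promises:

```typescript
const stream = await agent.network('Find the weather in Tokyo')

for await (const chunk of stream) {
  // Drain the stream so the run completes
}

console.log(await stream.status)
const { totalTokens } = await stream.usage
console.log(`Used ${totalTokens} tokens`)
```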

## Structured output

When you need typed, validated results from your network, use the `structuredOutput` option. The network will generate a response matching your schema after task completion.

```typescript
import { z } from 'zod'

const resultSchema = z.object({
  summary: z.string().describe('A brief summary of the findings'),
  recommendations: z.array(z.string()).describe('List of recommendations'),
  confidence: z.number().min(0).max(1).describe('Confidence score'),
})

const stream = await agent.network('Research AI trends and summarize', {
  structuredOutput: {
    schema: resultSchema,
  },
})

// Consume the stream
for await (const chunk of stream) {
  // Handle streaming events
}

// Get the typed result
const result = await stream.object
// result is typed as { summary: string; recommendations: string[]; confidence: number }
console.log(result?.summary)
console.log(result?.recommendations)
```

### Streaming partial objects

You can also stream partial objects as they're being generated:

```typescript
const stream = await agent.network('Analyze data', {
  structuredOutput: { schema: resultSchema },
})

// Stream partial objects
for await (const partial of stream.objectStream) {
  console.log('Partial result:', partial)
}

// Get final result
const final = await stream.object
```

### Chunk types

When using structured output, additional chunk types are emitted:

- `network-object`: Emitted with partial objects during streaming
- `network-object-result`: Emitted with the final structured object
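For example, handling these chunk types while iterating the stream from the structured output example above (the exact chunk payload shape is an assumption):

```typescript
for await (const chunk of stream) {
  if (chunk.type === 'network-object') {
    // Partial structured object while it is being generated
    console.log('partial:', chunk)
  } else if (chunk.type === 'network-object-result') {
    // Final validated structured object
    console.log('final:', chunk)
  }
}
```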

## Aborting a network

Use `abortSignal` to cancel a running network. When aborted, the network stops routing, cancels any in-progress sub-agent, tool, or workflow execution, and doesn't save partial results to memory.

```typescript
const controller = new AbortController()

// Abort after 30 seconds
setTimeout(() => controller.abort(), 30_000)

const stream = await agent.network('Research this topic thoroughly', {
  abortSignal: controller.signal,
  onAbort: ({ primitiveType, primitiveId, iteration }) => {
    console.log(`Aborted ${primitiveType} "${primitiveId}" at iteration ${iteration}`)
  },
})

for await (const chunk of stream) {
  if (
    chunk.type === 'routing-agent-abort' ||
    chunk.type === 'agent-execution-abort' ||
    chunk.type === 'tool-execution-abort' ||
    chunk.type === 'workflow-execution-abort'
  ) {
    console.log('Network was aborted')
  }
}
```