# Agent.getLLM()

The `.getLLM()` method retrieves the language model instance configured for an agent, resolving it first if the model was provided as a function. You can also pass a request-scoped `model` override without mutating the agent's configured model.

## Usage example

```typescript
await agent.getLLM()
```

```typescript
await agent.getLLM({
  model: 'openai/gpt-5.4',
})
```

## Parameters

**options** (`{ requestContext?: RequestContext; model?: MastraModelConfig | DynamicArgument<MastraModelConfig> }`): Optional configuration object containing a request context and a request-scoped model override. (Default: `{}`)

**options.requestContext** (`RequestContext`): Request Context for dependency injection and contextual information.

**options.model** (`MastraModelConfig | DynamicArgument<MastraModelConfig>`): Optional request-scoped model override. The agent's configured model is not mutated.

## Returns

**llm** (`MastraLLMV1 | Promise<MastraLLMV1>`): The language model instance configured for the agent, either a direct instance or a promise that resolves to one. In both cases, `await agent.getLLM()` yields the resolved instance.
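Because the return type is a value-or-promise union, `await` handles both cases uniformly (`await` is a no-op on plain values). A minimal sketch of this pattern, with no Mastra imports and hypothetical names (`getModel`, `demo` are illustrative only):

```typescript
// Sketch of a value-or-promise return, resolved uniformly with `await`.
type MaybePromise<T> = T | Promise<T>;

function getModel(dynamic: boolean): MaybePromise<string> {
  // A static config returns the instance directly; a function-based
  // config may resolve asynchronously.
  return dynamic ? Promise.resolve("resolved-model") : "static-model";
}

async function demo(): Promise<[string, string]> {
  // `await` works on both branches, so callers never need to
  // distinguish the synchronous case from the asynchronous one.
  const a = await getModel(false);
  const b = await getModel(true);
  return [a, b];
}
```

This is why the usage examples above always `await` the call, regardless of how the agent's model was configured.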

## Extended usage example

```typescript
await agent.getLLM({
  requestContext: new RequestContext(),
  model: 'openai/gpt-5.4',
})
```

## Related

- [Agents overview](https://mastra.ai/docs/agents/overview)
- [Request Context](https://mastra.ai/docs/server/request-context)