# voice.connect()

The `connect()` method establishes a WebSocket or WebRTC connection for real-time speech-to-speech communication. This method must be called before using other real-time features like `send()` or `answer()`.

## Usage example

```typescript
import { OpenAIRealtimeVoice } from '@mastra/voice-openai-realtime'
import Speaker from '@mastra/node-speaker'

const speaker = new Speaker({
  sampleRate: 24000, // Audio sample rate in Hz - matches the 24 kHz PCM stream the OpenAI Realtime API emits
  channels: 1, // Mono audio output (stereo would be 2)
  bitDepth: 16, // 16-bit PCM - the CD-quality standard
})

// Initialize a real-time voice provider
const voice = new OpenAIRealtimeVoice({
  realtimeConfig: {
    model: 'gpt-5.1-realtime',
    apiKey: process.env.OPENAI_API_KEY,
    options: {
      sessionConfig: {
        turn_detection: {
          type: 'server_vad',
          threshold: 0.6,
          silence_duration_ms: 1200,
        },
      },
    },
  },
  speaker: 'alloy', // Default voice
})
// Connect to the real-time service
await voice.connect()

// Now you can use real-time features
voice.on('speaker', stream => {
  stream.pipe(speaker)
})

// With connection options
await voice.connect({
  timeout: 10000, // 10 seconds timeout
  reconnect: true,
})
```

## Parameters

**options** (`Record<string, unknown>`): Provider-specific connection options

## Return value

Returns a `Promise<void>` that resolves when the connection is successfully established.

## Provider-specific options

Each real-time voice provider may support different options for the `connect()` method:

### OpenAI Realtime

**options** (`Options`): Configuration options.

**options.timeout** (`number`): Connection timeout in milliseconds

**options.reconnect** (`boolean`): Whether to automatically reconnect on connection loss
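The `timeout` option is provider-specific, so a provider may ignore it. For that case, a caller-side deadline can be sketched with `Promise.race`. `connectWithTimeout` below is a hypothetical helper, not part of the Mastra API; it works with any `connect`-shaped function that returns `Promise<void>`:

```typescript
// Minimal sketch of a caller-side connection timeout, assuming only
// that connect() returns a Promise<void> (as voice.connect() does).
async function connectWithTimeout(
  connect: () => Promise<void>,
  timeoutMs: number,
): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => {
      reject(new Error(`connect() timed out after ${timeoutMs} ms`))
    }, timeoutMs)
  })
  try {
    // Whichever settles first wins: the connection or the deadline
    await Promise.race([connect(), deadline])
  } finally {
    clearTimeout(timer) // cancel the pending rejection either way
  }
}
```

Usage would look like `await connectWithTimeout(() => voice.connect(), 10000)`. Note this only bounds how long the caller waits; it does not abort the underlying handshake.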

## Using with `CompositeVoice`

When using `CompositeVoice`, the `connect()` method delegates to the configured real-time provider:

```typescript
import { CompositeVoice } from '@mastra/core/voice'
import { OpenAIRealtimeVoice } from '@mastra/voice-openai-realtime'
const realtimeVoice = new OpenAIRealtimeVoice()
const voice = new CompositeVoice({
  realtime: realtimeVoice,
})
// This will use the OpenAIRealtimeVoice provider
await voice.connect()
```

## Notes

- This method is only implemented by real-time voice providers that support speech-to-speech capabilities
- If called on a voice provider that doesn't support this functionality, it will log a warning and resolve immediately
- The connection must be established before using other real-time methods like `send()` or `answer()`
- When you're done with the voice instance, call `close()` to properly clean up resources
- Some providers may automatically reconnect on connection loss, depending on their implementation
- Connection errors will typically be thrown as exceptions that should be caught and handled
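The error-handling and cleanup notes above can be sketched as a small lifecycle helper. `RealtimeVoiceLike` and `withVoiceSession` are hypothetical names, not part of the Mastra API; the interface only mirrors the `connect()`/`close()` shape described on this page:

```typescript
// Hypothetical shape matching the connect()/close() contract described above
interface RealtimeVoiceLike {
  connect(options?: Record<string, unknown>): Promise<void>
  close(): void
}

// Run a real-time session, guaranteeing close() is called even when
// connect() rejects or the session body throws.
async function withVoiceSession(
  voice: RealtimeVoiceLike,
  run: () => Promise<void>,
): Promise<void> {
  try {
    await voice.connect() // rejects if the connection cannot be established
    await run()
  } finally {
    voice.close() // release the socket and audio resources either way
  }
}
```

A caller would wrap its `send()`/`answer()` logic in the `run` callback and add its own `catch` for connection failures.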

## Related methods

- [voice.send()](https://mastra.ai/reference/voice/voice.send): Sends audio data to the voice provider
- [voice.answer()](https://mastra.ai/reference/voice/voice.answer): Triggers the voice provider to respond
- [voice.close()](https://mastra.ai/reference/voice/voice.close): Disconnects from the real-time service
- [voice.on()](https://mastra.ai/reference/voice/voice.on): Registers an event listener for voice events