# Voice events

Voice providers emit various events during real-time voice interactions. Listen for these events using the [voice.on()](https://mastra.ai/reference/voice/voice.on) method; they are particularly important for building interactive voice applications.
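As a minimal sketch of registering listeners, the example below uses Node's `EventEmitter` as a stand-in for a provider instance (real providers expose a similar `on()` API). The `{ text, role }` payload shape for the `writing` event is an assumption; check your provider's documentation for exact payloads.

```typescript
import { EventEmitter } from "node:events";

// Stand-in for a real-time voice provider instance.
const voice = new EventEmitter();

const transcript: string[] = [];

// "writing" carries incremental text from the user (transcription)
// or the assistant (generation). Payload shape is assumed here.
voice.on("writing", ({ text, role }: { text: string; role: string }) => {
  transcript.push(`${role}: ${text}`);
});

// Always handle "error" to avoid unhandled exceptions.
voice.on("error", (err: Error) => {
  console.error("voice error:", err.message);
});

// Simulate the provider emitting an event.
voice.emit("writing", { text: "Hello!", role: "assistant" });
// transcript now contains "assistant: Hello!"
```

With a real provider you would not emit events yourself; the provider fires them as the session progresses.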

## Common events

These events are commonly implemented across real-time voice providers:

**error** (`Error`): Emitted when an error occurs during voice processing or when the audio data format is unsupported

**session.created** (`object`): Emitted when a new session is created with the voice service (e.g., OpenAI's realtime service)

**session.updated** (`object`): Emitted when the session configuration is updated

**response.created** (`object`): Emitted when a new response is created by the AI assistant

**response.done** (`object`): Emitted when the AI assistant has completed its response

**speaker** (`StreamWithId`): Emitted with a new audio stream that can be piped to an audio output

**writing** (`object`): Emitted when text is being transcribed (user) or generated (assistant)

**speaking** (`object`): Emitted when audio data is available from the voice provider

**speaking.done** (`object`): Emitted when the voice provider has finished speaking

**tool-call-start** (`object`): Emitted when the AI assistant starts executing a tool

**tool-call-result** (`object`): Emitted when a tool execution is complete with its result
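To illustrate handling the `speaker` event, the sketch below pipes an incoming audio stream into a writable sink. A `PassThrough` stream stands in for the provider's `StreamWithId`, and the in-memory sink stands in for a real audio output device; both are illustrative assumptions.

```typescript
import { EventEmitter } from "node:events";
import { PassThrough, Writable } from "node:stream";

const voice = new EventEmitter(); // stand-in for a voice provider

// Collect audio bytes; a real app would pipe into a speaker device instead.
const chunks: Buffer[] = [];
const audioOutput = new Writable({
  write(chunk, _encoding, callback) {
    chunks.push(chunk);
    callback();
  },
});

// "speaker" delivers a readable audio stream that can be piped onward.
voice.on("speaker", (stream: NodeJS.ReadableStream) => {
  stream.pipe(audioOutput);
});

// Simulate the provider emitting an audio stream.
const audio = new PassThrough();
voice.emit("speaker", audio);
audio.end(Buffer.from("fake-pcm-bytes"));
```

Because each emitted stream is piped as it arrives, successive assistant responses can play back-to-back without buffering the whole response first.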

## Notes

- Not all events are supported by all voice providers
- The exact payload structure may vary between providers
- For non-real-time providers, most of these events won't be emitted
- Events are useful for building interactive UIs that respond to the conversation state
- Consider using the [voice.off()](https://mastra.ai/reference/voice/voice.off) method to remove event listeners when they're no longer needed
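The cleanup advice in the last note can be sketched as follows; again an `EventEmitter` stands in for the provider, and `onWriting` is a hypothetical handler name. The key point is to keep a reference to the same function you registered, so it can be removed later.

```typescript
import { EventEmitter } from "node:events";

const voice = new EventEmitter(); // stand-in for a voice provider

let received = 0;
const onWriting = () => {
  received += 1;
};

voice.on("writing", onWriting);
voice.emit("writing", { text: "hi", role: "user" }); // handled

// Remove the listener using the same function reference.
voice.off("writing", onWriting);
voice.emit("writing", { text: "bye", role: "user" }); // no longer handled

// received === 1
```

An anonymous inline handler cannot be removed this way, since `off()` matches listeners by reference.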