SpeechAudioFormat Fields
The SpeechAudioFormat type exposes the following members.
Name | Description
---|---
AverageBytesPerSecond | The number of bytes of audio data that must be streamed to a D/A converter per second in order to play the audio.
BitsPerSample | The number of significant bits in each audio sample. Usually 16 or 24.
BlockAlign | The number of data bytes per sample slice.
ChannelCount | The number of separate audio signals in the audio data. A value of 1 means a mono signal; a value of 2 means a stereo signal.
EncodingFormat | The type of compression used on the audio data, stored as a short. The default value is 1, which means PCM. Use getEncodingFormat or setEncodingFormat to work with this value as an AudioCompressionType.
FormatSpecificData | Extra bytes that describe parameters of certain audio compression types. This field should be empty for PCM.
SamplesPerSecond | The number of audio sample slices per second, where one slice includes one sample from each channel. This value is therefore unaffected by the number of channels.
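For uncompressed PCM audio, BlockAlign and AverageBytesPerSecond follow directly from the other fields: a sample slice holds one sample per channel, so BlockAlign = ChannelCount × BitsPerSample / 8, and AverageBytesPerSecond = SamplesPerSecond × BlockAlign. The following is a minimal plain-Java sketch of these relationships (the class and method names are illustrative, not part of the SpeechAudioFormat API):

```java
public class PcmFormat {
    // BlockAlign: bytes in one sample slice (one sample from each channel)
    static int blockAlign(int channelCount, int bitsPerSample) {
        return channelCount * bitsPerSample / 8;
    }

    // AverageBytesPerSecond: bytes that must be streamed per second of audio
    static int averageBytesPerSecond(int samplesPerSecond, int blockAlign) {
        return samplesPerSecond * blockAlign;
    }

    public static void main(String[] args) {
        int align = blockAlign(2, 16);                // 16-bit stereo: 4 bytes per slice
        int avg = averageBytesPerSecond(44100, align); // 44.1 kHz: 176400 bytes/s
        System.out.println(align + " " + avg);
    }
}
```

For example, 16-bit stereo audio at 44,100 samples per second yields a BlockAlign of 4 bytes and an AverageBytesPerSecond of 176,400.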