SpeechAudioFormat Class
Namespace: Microsoft.CognitiveServices.SpeechRecognition
public class SpeechAudioFormat
The SpeechAudioFormat type exposes the following members.
Constructors

Name | Description
---|---
SpeechAudioFormat | The SpeechAudioFormat contains information about how the audio data was recorded and stored, including the type of compression used, the number of channels, the sample rate, bits per sample, and other attributes.
Methods

Name | Description
---|---
create16BitPCMFormat | Produces a SpeechAudioFormat for 16-bit PCM data.
createSiren7Format | Produces a SpeechAudioFormat for data encoded in Siren7. The data must be encoded in mono, such that a 320-sample mono input frame produces a 40-byte output frame.
Equals | (Inherited from Object.)
Finalize | (Inherited from Object.)
GetHashCode | (Inherited from Object.)
GetType | (Inherited from Object.)
MemberwiseClone | (Inherited from Object.)
ToString | (Inherited from Object.)
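The Siren7 framing constraint described above can be checked with a little arithmetic. This is only an illustrative sketch: the class and method names below are hypothetical helpers, and the assumption that the mono input samples are 16-bit PCM is ours, not stated in the table.

```java
// Sketch of the Siren7 framing math: a 320-sample mono input frame
// (assumed 16-bit PCM) is encoded into a 40-byte output frame.
public class Siren7Framing {
    // Bytes of raw PCM in one input frame: samples * bytes per sample.
    static int inputBytesPerFrame(int samplesPerFrame, int bitsPerSample) {
        return samplesPerFrame * (bitsPerSample / 8);
    }

    // Size reduction achieved by the encoder for one frame.
    static int compressionRatio(int inputBytes, int outputBytes) {
        return inputBytes / outputBytes;
    }

    public static void main(String[] args) {
        int inputBytes = inputBytesPerFrame(320, 16); // 640 bytes of PCM in
        int ratio = compressionRatio(inputBytes, 40); // 40 bytes out -> 16:1
        System.out.println(inputBytes + " bytes in, 16:1 -> ratio " + ratio);
    }
}
```

Under these assumptions, each 640-byte PCM frame compresses to 40 bytes, a 16:1 reduction.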
Fields

Name | Description
---|---
AverageBytesPerSecond | How many bytes of audio data must be streamed to a D/A converter per second in order to play the audio.
BitsPerSample | The number of significant bits in each audio sample, usually 16 or 24.
BlockAlign | The number of data bytes per sample slice.
ChannelCount | The number of separate audio signals in the audio data: 1 for a mono signal, 2 for a stereo signal, and so on.
EncodingFormat | The type of compression used on the audio data, stored as a short. The default value is 1 (PCM). Use getEncodingFormat or setEncodingFormat to treat this short as an AudioCompressionType.
FormatSpecificData | Extra bytes used to describe parameters of certain audio compression types. This field should be empty for PCM.
SamplesPerSecond | Audio sample slices per second (one slice includes all the channel samples), so this value is unaffected by the number of channels.
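For uncompressed PCM, the fields above are related by simple arithmetic: BlockAlign is the channel count times the bytes per sample, and AverageBytesPerSecond is the sample rate times BlockAlign. The sketch below illustrates this relationship with a standalone helper class (the class and method names are hypothetical, not part of the SDK):

```java
// Sketch: how the PCM fields relate. For 16 kHz mono 16-bit PCM,
// BlockAlign = 1 channel * 2 bytes = 2, and
// AverageBytesPerSecond = 16000 slices/s * 2 bytes = 32000.
public class PcmFormatMath {
    // BlockAlign: bytes in one sample slice (all channels of one sample).
    static int blockAlign(int channelCount, int bitsPerSample) {
        return channelCount * (bitsPerSample / 8);
    }

    // AverageBytesPerSecond: bytes streamed per second of playback.
    static int averageBytesPerSecond(int samplesPerSecond, int blockAlign) {
        return samplesPerSecond * blockAlign;
    }

    public static void main(String[] args) {
        int channels = 1, bits = 16, rate = 16000; // 16 kHz mono 16-bit PCM
        int align = blockAlign(channels, bits);
        int avg = averageBytesPerSecond(rate, align);
        System.out.println("BlockAlign=" + align + " AvgBytes/s=" + avg);
    }
}
```

Note that SamplesPerSecond counts slices, not individual channel samples, which is why the channel count enters only through BlockAlign.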