IDataRecognitionClient Interface
Defines a client that sends audio data to the Speech Recognition Service for recognition.
Namespace: Microsoft.CognitiveServices.SpeechRecognition
public interface IDataRecognitionClient : IConversation
The IDataRecognitionClient type exposes the following members.
Methods

| Name | Description |
|---|---|
| AudioStart | Turns the microphone on and sends microphone data to the Speech Recognition Service. A built-in silence detector is applied to the microphone data before it is sent to the recognition service. (Inherited from IConversation.) |
| AudioStop | Turns the microphone off and severs the connection to the Speech Recognition Service. (Inherited from IConversation.) |
| CreateAudioStream | Gets an audio stream with no format specified. Use this method when the format information can be detected within the stream itself. (Inherited from IConversation.) |
| CreateAudioStream(SpeechAudioFormat) | Gets an audio stream with the specified format. Use this method when sending raw audio samples. (Inherited from IConversation.) |
| EndAudio | Notifies the server that the client has finished sending audio buffers to the Speech Recognition Service. This work is queued onto a background worker. |
| SendAudio | Sends buffers of audio to the Speech Recognition Service. For wave files, you can send data from the file directly to the server. The audio must be PCM, mono, with 16-bit samples and a sample rate of 8000 Hz or 16000 Hz. If you do not have an audio file in wave format and instead have raw data (for example, audio coming over Bluetooth), you must first send a SpeechAudioFormat descriptor via the SendAudioFormat method to describe the layout and format of your raw audio data before sending any audio data with this method. This work is queued onto a background worker. |
| SendAudioFormat | If you are not sending an audio file in wave format and instead have raw data, you must first send a SpeechAudioFormat descriptor to describe the layout and format of your raw audio data before sending any audio data. The audio must be PCM, mono, with 16-bit samples and a sample rate of 8000 Hz or 16000 Hz. |
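The SendAudio/EndAudio flow described above can be sketched as follows. This is a minimal illustration only: it assumes an IDataRecognitionClient instance has already been obtained elsewhere (for example, from the SDK's client factory), and it sends a wave file whose payload already matches the required PCM, mono, 16-bit, 8000/16000 Hz format, so no SpeechAudioFormat descriptor is needed first.

```csharp
using System.IO;
using Microsoft.CognitiveServices.SpeechRecognition;

static class WaveSender
{
    // Streams a wave file to the Speech Recognition Service in small
    // buffers, then signals end-of-audio. Results arrive asynchronously
    // via the client's OnResponseReceived event.
    public static void SendWaveFile(IDataRecognitionClient client, string path)
    {
        var buffer = new byte[1024];
        using (var stream = File.OpenRead(path))
        {
            int bytesRead;
            while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // SendAudio queues each buffer onto a background worker.
                client.SendAudio(buffer, bytesRead);
            }
        }
        // Tell the service that no more audio buffers are coming.
        client.EndAudio();
    }
}
```

For raw (non-wave) data such as Bluetooth audio, the same loop applies, but a SendAudioFormat call describing the raw layout must precede the first SendAudio call.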
Properties

| Name | Description |
|---|---|
| AuthenticationUri | Represents the authentication service endpoint. (Inherited from IConversation.) |
Events

| Name | Description |
|---|---|
| OnConversationError | Event fired when a conversation error occurs. (Inherited from IConversation.) |
| OnIntent | Event fired when speech recognition has finished, the recognized text has been parsed with LUIS for intent and entities, and the structured JSON result is available. (Inherited from IConversation.) |
| OnMicrophoneStatus | Event fired when the microphone recording status has changed. (Inherited from IConversation.) |
| OnPartialResponseReceived | Event fired when a partial response is received. (Inherited from IConversation.) |
| OnResponseReceived | Event fired when a response is received. (Inherited from IConversation.) |