SpeechRecognitionServiceFactory Class Reference

Inherits from NSObject
Declared in SpeechRecognitionService.h
SpeechRecognitionServiceFactory.mm

Overview

Factory for creating clients of the Azure Intelligent Services speech recognition service. The factory can create four types of clients:

DataRecognitionClient This client is optimal for applications that require speech recognition with previously acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz. Returns speech recognition results.

DataRecognitionClientWithIntent This client is optimal for applications that require speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz. Returns speech recognition results and intent results; intent results are returned in structured JSON form (see https://LUIS.ai).

MicrophoneRecognitionClient This client is optimal for applications that require speech recognition from microphone input.

When the microphone is turned on, audio data from the microphone is streamed to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. Returns speech recognition results.

MicrophoneRecognitionClientWithIntent This client is optimal for applications that require speech recognition and intent detection from microphone input.

When the microphone is turned on, audio data from the microphone is streamed to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. Returns speech recognition and intent results. Intent results are returned in structured JSON form (see https://LUIS.ai).
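
For example, a minimal sketch of the common microphone scenario. The subscription key is a placeholder, self is assumed to adopt SpeechRecognitionProtocol, and the Short Phrase enum constant name is assumed, not taken from this reference:

    // Minimal sketch: create a microphone client and run one recognition session.
    // SpeechRecognitionMode_ShortPhrase is an assumed constant name for Short Phrase mode.
    MicrophoneRecognitionClient *client =
        [SpeechRecognitionServiceFactory createMicrophoneClient:SpeechRecognitionMode_ShortPhrase
                                                    withLanguage:@"en-us"
                                                         withKey:@"YOUR_SUBSCRIPTION_KEY"
                                                    withProtocol:self]; // self adopts SpeechRecognitionProtocol
    [client startMicAndRecognition];   // start streaming microphone audio to the service
    // ... results arrive through the SpeechRecognitionProtocol callbacks ...
    [client endMicAndRecognition];     // stop recognition when finished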

+ getAPIVersion

Gets the API version

+ (NSString *)getAPIVersion

Return Value

The version of the API you are currently using.

Discussion

Gets the API version
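
For example, the version string can be checked at runtime:

    NSString *apiVersion = [SpeechRecognitionServiceFactory getAPIVersion];
    NSLog(@"Speech API version: %@", apiVersion);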

Declared In

SpeechRecognitionServiceFactory.mm

+ createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:

Allocates and initializes a preferences object based on the specified recognition mode.

+ (AdmRecoOnlyPreferences *)createPrefs:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withUrl:(NSString *)url

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices.

In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The preferences object.

Discussion

Allocates and initializes a preferences object based on the specified recognition mode.
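
As a sketch, a preferences object for Short Phrase recognition might be built as follows. All string values are placeholders, the Short Phrase constant name is assumed, and passing nil for an unused adaptation URL is an assumption rather than documented behavior:

    AdmRecoOnlyPreferences *prefs =
        [SpeechRecognitionServiceFactory createPrefs:SpeechRecognitionMode_ShortPhrase // assumed constant name
                                         withLanguage:@"en-us"
                                       withPrimaryKey:@"YOUR_PRIMARY_KEY"
                                     withSecondaryKey:@"YOUR_SECONDARY_KEY"
                                        withLUISAppID:@"YOUR_LUIS_APP_ID"
                                       withLUISSecret:@"YOUR_LUIS_SUBSCRIPTION_ID"
                                              withUrl:nil]; // or a custom acoustic-model endpoint; nil is an assumption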

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClient:withLanguage:withKey:withProtocol:

Creates a DataRecognitionClient for speech recognition with acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices.

In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

delegate

The protocol used to perform the callbacks/events during speech recognition.

Return Value

The created DataRecognitionClient.

Discussion

Creates a DataRecognitionClient for speech recognition with acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

The recognition service returns only speech recognition results and does not perform intent detection.
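
A sketch of the expected usage, assuming a file already in the required PCM format; the buffer-sending calls are shown commented out with assumed names, since the audio-streaming API belongs to the DataRecognitionClient reference rather than this page:

    // Minimal sketch; self is assumed to adopt SpeechRecognitionProtocol.
    DataRecognitionClient *dataClient =
        [SpeechRecognitionServiceFactory createDataClient:SpeechRecognitionMode_ShortPhrase // assumed constant name
                                              withLanguage:@"en-us"
                                                   withKey:@"YOUR_SUBSCRIPTION_KEY"
                                              withProtocol:self];

    // Audio must already be PCM, mono, 16-bit, 16000 Hz.
    NSData *audioData = [NSData dataWithContentsOfFile:@"/path/to/audio.pcm"]; // placeholder path
    // Stream the data to the service in buffers via the client's audio API,
    // e.g. (method names assumed, not part of this reference):
    // [dataClient sendAudio:audioData length:(int)audioData.length];
    // [dataClient endAudio];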

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:

Creates a DataRecognitionClient for speech recognition with acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices.

In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

delegate

The protocol used to perform the callbacks/events during speech recognition.

Return Value

The created DataRecognitionClient.

Discussion

Creates a DataRecognitionClient for speech recognition with acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

The recognition service returns only speech recognition results and does not perform intent detection.

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClient:withLanguage:withKey:withProtocol:withUrl:

Creates a DataRecognitionClient with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

delegate

The protocol used to perform the callbacks/events during speech recognition.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created DataRecognitionClient.

Discussion

Creates a DataRecognitionClient with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

The recognition service returns only speech recognition results and does not perform intent detection.
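
The call differs from the non-adaptation overload only in the trailing withUrl: argument. A sketch with a placeholder adaptation endpoint, under the same assumptions about the enum constant name and delegate as above:

    DataRecognitionClient *adaptedClient =
        [SpeechRecognitionServiceFactory createDataClient:SpeechRecognitionMode_LongDictation // assumed constant name
                                              withLanguage:@"en-us"
                                                   withKey:@"YOUR_SUBSCRIPTION_KEY"
                                              withProtocol:self
                                                   withUrl:@"https://YOUR_ADAPTED_MODEL_ENDPOINT"]; // placeholder URL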

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:

Creates a DataRecognitionClient with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

delegate

The protocol used to perform the callbacks/events during speech recognition.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created DataRecognitionClient.

Discussion

Creates a DataRecognitionClient with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

The recognition service returns only speech recognition results and does not perform intent detection.

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:

Creates a DataRecognitionClientWithIntent for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

Return Value

The created DataRecognitionClientWithIntent.

Discussion

Creates a DataRecognitionClientWithIntent for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service.

No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.

Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

Returns speech recognition results and intent results; intent results are returned in structured JSON form (see https://LUIS.ai).
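
A sketch with a placeholder subscription key and placeholder LUIS identifiers:

    // Minimal sketch; self is assumed to adopt SpeechRecognitionProtocol.
    DataRecognitionClientWithIntent *intentClient =
        [SpeechRecognitionServiceFactory createDataClientWithIntent:@"en-us"
                                                             withKey:@"YOUR_SUBSCRIPTION_KEY"
                                                       withLUISAppID:@"YOUR_LUIS_APP_ID"
                                                      withLUISSecret:@"YOUR_LUIS_SUBSCRIPTION_ID"
                                                        withProtocol:self];
    // Audio buffers are then streamed exactly as with DataRecognitionClient;
    // recognition and LUIS intent results arrive through the delegate callbacks.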

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:

Creates a DataRecognitionClientWithIntent for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

Return Value

The created DataRecognitionClientWithIntent.

Discussion

Creates a DataRecognitionClientWithIntent for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service.

No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.

Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

Returns speech recognition results and intent results; intent results are returned in structured JSON form (see https://LUIS.ai).

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:

Creates a DataRecognitionClientWithIntent with Acoustic Model Adaptation for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created DataRecognitionClientWithIntent.

Discussion

Creates a DataRecognitionClientWithIntent with Acoustic Model Adaptation for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service.

No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.

Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

Returns speech recognition results and intent results; intent results are returned in structured JSON form (see https://LUIS.ai).

Declared In

SpeechRecognitionServiceFactory.mm

+ createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:

Creates a DataRecognitionClientWithIntent with Acoustic Model Adaptation for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created DataRecognitionClientWithIntent.

Discussion

Creates a DataRecognitionClientWithIntent with Acoustic Model Adaptation for speech recognition and intent detection with previously acquired data, for example from a file or Bluetooth audio source.

Data is broken up into buffers and each buffer is sent to the speech recognition service.

No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.

Audio data must be PCM, mono, 16-bit, with a sample rate of 16000 Hz.

Returns speech recognition results and intent results; intent results are returned in structured JSON form (see https://LUIS.ai).

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClient:withLanguage:withKey:withProtocol:

Creates a MicrophoneRecognitionClient that uses the microphone as the input source.

+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

delegate

The protocol used to perform the callbacks/events during speech recognition.

Return Value

The created MicrophoneRecognitionClient.

Discussion

Creates a MicrophoneRecognitionClient that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.
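
For example, a Long-form Dictation session might look like this (the key is a placeholder and the dictation constant name is assumed):

    MicrophoneRecognitionClient *dictationClient =
        [SpeechRecognitionServiceFactory createMicrophoneClient:SpeechRecognitionMode_LongDictation // assumed constant name
                                                    withLanguage:@"en-us"
                                                         withKey:@"YOUR_SUBSCRIPTION_KEY"
                                                    withProtocol:self]; // self adopts SpeechRecognitionProtocol
    [dictationClient startMicAndRecognition]; // multiple final results arrive as the speaker pauses
    // ...
    [dictationClient endMicAndRecognition];   // stop streaming and release the microphone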

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:

Creates a MicrophoneRecognitionClient that uses the microphone as the input source.

+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

delegate

The protocol used to perform the callbacks/events during speech recognition.

Return Value

The created MicrophoneRecognitionClient.

Discussion

Creates a MicrophoneRecognitionClient that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.
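
The two-key overload differs only in how credentials are supplied. A sketch with placeholder keys:

    // Both keys belong to the same subscription; the service accepts either one,
    // so you can disable and replace a key while the other remains active.
    MicrophoneRecognitionClient *micClient =
        [SpeechRecognitionServiceFactory createMicrophoneClient:SpeechRecognitionMode_ShortPhrase // assumed constant name
                                                    withLanguage:@"en-us"
                                                  withPrimaryKey:@"YOUR_PRIMARY_KEY"
                                                withSecondaryKey:@"YOUR_SECONDARY_KEY"
                                                    withProtocol:self]; // self adopts SpeechRecognitionProtocol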

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:

Creates a MicrophoneRecognitionClient with Acoustic Model Adaptation that uses the microphone as the input source.

+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

delegate

The protocol used to perform the callbacks/events during speech recognition.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created MicrophoneRecognitionClient.

Discussion

Creates a MicrophoneRecognitionClient with Acoustic Model Adaptation that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:

Creates a MicrophoneRecognitionClient with Acoustic Model Adaptation that uses the microphone as the input source.

+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)speechRecognitionMode withLanguage:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

speechRecognitionMode

The speech recognition mode.

In Short Phrase mode, the client receives a single final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

delegate

The protocol used to perform the callbacks/events during speech recognition.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created MicrophoneRecognitionClient.

Discussion

Creates a MicrophoneRecognitionClient with Acoustic Model Adaptation that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:

Creates a MicrophoneRecognitionClientWithIntent that uses the microphone as the input source.

+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

Return Value

The created MicrophoneRecognitionClientWithIntent.

Discussion

Creates a MicrophoneRecognitionClientWithIntent that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.

The service returns structured intent results in JSON form (see https://LUIS.ai).
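
A sketch with placeholder credentials and placeholder LUIS identifiers:

    MicrophoneRecognitionClientWithIntent *micIntentClient =
        [SpeechRecognitionServiceFactory createMicrophoneClientWithIntent:@"en-us"
                                                                   withKey:@"YOUR_SUBSCRIPTION_KEY"
                                                             withLUISAppID:@"YOUR_LUIS_APP_ID"
                                                            withLUISSecret:@"YOUR_LUIS_SUBSCRIPTION_ID"
                                                              withProtocol:self]; // self adopts SpeechRecognitionProtocol
    [micIntentClient startMicAndRecognition]; // speech results and LUIS intent JSON arrive via the delegate
    // ...
    [micIntentClient endMicAndRecognition];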

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:

Creates a MicrophoneRecognitionClientWithIntent that uses the microphone as the input source.

+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

Return Value

The created MicrophoneRecognitionClientWithIntent.

Discussion

Creates a MicrophoneRecognitionClientWithIntent that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.

The service returns structured intent results in JSON form (see https://LUIS.ai).

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:

Creates a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation that uses the microphone as the input source.

+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)language withPrimaryKey:(NSString *)primaryKey withSecondaryKey:(NSString *)secondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryKey

The primary key. It’s a best practice that the application rotate keys periodically. Between rotations, you would disable the primary key, making the secondary key the default, giving you time to swap out the primary.

secondaryKey

The secondary key. Intended to be used when the primary key has been disabled.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created MicrophoneRecognitionClientWithIntent.

Discussion

Creates a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.

The service returns structured intent results in JSON form (see https://LUIS.ai).

Declared In

SpeechRecognitionServiceFactory.mm

+ createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:

Creates a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation that uses the microphone as the input source.

+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)language withKey:(NSString *)primaryOrSecondaryKey withLUISAppID:(NSString *)luisAppID withLUISSecret:(NSString *)luisSubscriptionID withProtocol:(id<SpeechRecognitionProtocol>)delegate withUrl:(NSString *)url

Parameters

language

The language of the speech being recognized. The supported languages are:

  • en-us: American English

  • en-gb: British English

  • de-de: German

  • es-es: Spanish

  • fr-fr: French

  • it-it: Italian

  • zh-cn: Mandarin Chinese

primaryOrSecondaryKey

The primary or the secondary key.

You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary and to rotate key usage between these two keys. While one key is disabled, the other key will still work, allowing your application to remain active while the disabled key is replaced.

luisAppID

Once you have configured the LUIS service to create and publish an intent model (see https://LUIS.ai) you will be given an Application ID GUID. Use that GUID here.

luisSubscriptionID

Once you create a LUIS account (see https://LUIS.ai), you will be given a Subscription ID; use it as the LUIS secret here.

delegate

The protocol used to perform the callbacks/events during speech recognition and intent detection.

url

The endpoint of the custom Acoustic Model that you created with the Acoustic Model Specialization Service.

Return Value

The created MicrophoneRecognitionClientWithIntent.

Discussion

Creates a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation that uses the microphone as the input source.

To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results. To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.

The service returns structured intent results in JSON form (see https://LUIS.ai).

Declared In

SpeechRecognitionServiceFactory.mm