# STP Plugin: Microsoft Cognitive Services Speech

The Sketch-thru-Plan (STP) recognizer can employ transcribed speech generated by potentially different recognizers. To promote code reuse and make it possible to more easily swap recognizers, the functionality should be packaged as a plugin that conforms to a well-known interface. 

This plugin is implemented on top of Microsoft's Cognitive Services Speech to Text. It implements two different strategies: single-shot recognition of individual utterances, and continuous recognition over the duration of a sketch.

## Prerequisites

You will need an Azure Speech Services subscription key and region. Azure Speech is a fully managed service — once the resource is created, you can start using it immediately.

### Obtaining Azure Speech credentials

1. Go to the **Azure Portal** → **Create a resource** → search for **Speech** → **Create**
2. Select a **Subscription**, **Resource group**, and **Region** (e.g. `eastus`)
3. Choose a **Pricing tier** (the **Free (F0)** tier provides 5 hours/month of speech-to-text at no cost)
4. Once the resource is created, go to **Keys and Endpoint**
5. Copy **Key 1** (or Key 2) — this is your **subscription key**
6. Note the **Location/Region** (e.g. `eastus`) — this is your **service region**

These can be passed to the plugin constructor directly, or via querystring parameters in the samples (e.g. `?azkey=...&azregion=eastus`).
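For instance, the samples' querystring convention can be read with a small helper before constructing the recognizer. `getAzureCredentials` is a hypothetical name introduced here for illustration; `AzureSpeechRecognizer` is the constructor shown later in this README:

```javascript
// Hypothetical helper: pull the Azure credentials from a query string such as
// "?azkey=...&azregion=eastus" (the parameter names used by the samples),
// falling back to a default region when none is supplied.
function getAzureCredentials(search) {
  const params = new URLSearchParams(search);
  return {
    key: params.get("azkey"),
    region: params.get("azregion") ?? "eastus",
  };
}

// In a browser, pass window.location.search:
const { key, region } = getAzureCredentials("?azkey=abc123&azregion=eastus");
// const stpConn = new StpAS.AzureSpeechRecognizer(key, region);
```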

> **Tip:** For production applications, consider using **Azure Active Directory (AAD) token-based authentication** instead of embedding subscription keys in client-side code.
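One common pattern is to exchange the subscription key for a short-lived token on a backend, so the key never reaches the browser. The sketch below (Node 18+) uses the standard Cognitive Services STS endpoint; whether the plugin's constructor accepts a token in place of a key is an assumption to verify against its API documentation:

```javascript
// Server-side sketch: exchange the subscription key for a short-lived access
// token via the Cognitive Services STS endpoint. The issued token (a JWT) is
// valid for roughly 10 minutes and must be refreshed periodically.
async function issueSpeechToken(subscriptionKey, region) {
  const res = await fetch(
    `https://${region}.api.cognitive.microsoft.com/sts/v1.0/issueToken`,
    { method: "POST", headers: { "Ocp-Apim-Subscription-Key": subscriptionKey } }
  );
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return res.text();
}
```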


## Accessing the plugin functionality

You can get the plugin from npm:

```shell
npm install --save @hyssostech/azurespeech-plugin
```

Or you can embed it directly as a script using [`jsdelivr`](https://www.jsdelivr.com/package/npm/@hyssostech/azurespeech-plugin). As always, it is recommended that a specific version be used rather than `@latest`, to prevent breaking changes from affecting existing code:

```html
<!-- Include _after_ external services such as the Microsoft Cognitive Services Speech;
     pin a specific version in place of `@latest` for production use -->
<script src="https://cdn.jsdelivr.net/npm/@hyssostech/azurespeech-plugin@latest/dist/stpazurespeech-bundle-min.js"></script>
```

## Referencing the plugin

The plugin is built as a `UMD` library, and is therefore compatible with plain vanilla JavaScript (via an IIFE global), AMD, and CommonJS. An ESM bundle (`stpazurespeech-bundle.esm.js`) is also included.

When used in vanilla JavaScript, the exported `StpAS` global provides access to the SDK types:

```javascript
const stpConn = new StpAS.AzureSpeechRecognizer(azureSubscriptionKey, azureServiceRegion);
```

In TypeScript, import `@hyssostech/azurespeech-plugin` after installing via npm:

```javascript
import * as StpAS from "@hyssostech/azurespeech-plugin";
const stpConn = new StpAS.AzureSpeechRecognizer(azureSubscriptionKey, azureServiceRegion);
```

Or import individual types:

```javascript
import { AzureSpeechRecognizer } from "@hyssostech/azurespeech-plugin";
const stpConn = new AzureSpeechRecognizer(azureSubscriptionKey, azureServiceRegion);
```

## Examples

* The single-shot recognition approach is used in the [quickstart](../../../quickstart/README.md)
* Continuous recognition over the duration of sketching is used in the [samples](../../../samples/README.md)


## Building the project

The repository includes a pre-built `dist` folder that can be used directly for testing. If changes are made to the source and there is a need to rebuild, run:

```shell
npm install
npm run build
```

## Documentation

Additional documentation can be found in the generated `docs` folder.
