Picovoice Platform - Microcontroller Quick Start

Platforms

  • Arm Cortex-M4
  • Arm Cortex-M7

Requirements

  • C99-compatible compiler

Picovoice Account & AccessKey

  1. Log in to, or sign up for, a free account on the Picovoice Console.
  2. Go to the AccessKey tab to create a new AccessKey or copy an existing one. Keep your AccessKey secret.

Quick Start

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Usage

  1. Include the public header files.
  2. Link the project to an appropriate library file.
  3. Construct the Picovoice object:
#include <stdint.h>

#include "pv_picovoice.h"

#define MEMORY_BUFFER_SIZE ${MEMORY_BUFFER_SIZE}

static const char* ACCESS_KEY = "${ACCESS_KEY}"; // AccessKey from the Picovoice Console

static uint8_t memory_buffer[MEMORY_BUFFER_SIZE] __attribute__((aligned(16)));

// Declared as true arrays (not pointers) so that sizeof() below yields the
// model size in bytes rather than the size of a pointer.
static const uint8_t keyword_array[] = ${KEYWORD_ARRAY};

const float porcupine_sensitivity = 0.5f;

static void wake_word_callback(void) {
    // logic to execute upon detection of wake word
}

static const uint8_t context_array[] = ${CONTEXT_ARRAY};

const float rhino_sensitivity = 0.75f;

static void inference_callback(pv_inference_t *inference) {
    // `inference` exposes three immutable properties:
    // (1) `is_understood`
    // (2) `intent`
    // (3) `slots` and `values`
    // ..
    pv_inference_delete(inference);
}

pv_picovoice_t *picovoice = NULL;

const pv_status_t status = pv_picovoice_init(
        ACCESS_KEY,
        MEMORY_BUFFER_SIZE,
        memory_buffer,
        sizeof(keyword_array),
        keyword_array,
        porcupine_sensitivity,
        wake_word_callback,
        sizeof(context_array),
        context_array,
        rhino_sensitivity,
        true,
        inference_callback,
        &picovoice);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

Pass frames of audio to the pv_picovoice_process function. Audio must be 16-bit, 16 kHz, single-channel PCM, and each frame must contain the number of samples reported by pv_picovoice_frame_length():

extern const int16_t *get_next_audio_frame(void);

while (true) {
    const int16_t *pcm = get_next_audio_frame();
    const pv_status_t status = pv_picovoice_process(picovoice, pcm);
    if (status != PV_STATUS_SUCCESS) {
        // error handling logic
    }
}

Release resources explicitly when done with Picovoice:

pv_picovoice_delete(picovoice);

Create Custom Keyword and Context Models

  1. Obtain the UUID of the chipset.
  2. Go to Picovoice Console to create models for Porcupine wake word engine and Rhino Speech-to-Intent engine.
  3. Select Arm Cortex-M as the platform when training the model.
  5. Select the appropriate board type.
  5. Train your models.
  6. Download your custom voice model(s).
  7. Decompress the zip file. The model file is either .ppn (Porcupine wake word) or .rhn (Rhino Speech-to-Intent). Each zip archive also contains a .h header file with the C array version of the binary model.
  8. Copy the contents of the arrays inside the .h header files and update the keyword_array and context_array values.

Non-English Languages

Use the corresponding library file (.a) to process non-English wake words and contexts. The library files for all supported languages are available on the Picovoice GitHub repository.

Demo

For the Picovoice MCU SDK, we offer demo projects for several evaluation boards that demonstrate how to run the Picovoice Platform on microcontrollers. The full list of supported boards is available here.

Setup

Clone the repository:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Usage

Resources

API

GitHub

Benchmark

Videos


Issue with this doc? Please let us know.