Picovoice Platform — C Quick Start

Platforms

  • Linux (x86_64)
  • macOS (x86_64, arm64)
  • Windows (x86_64)
  • BeagleBone
  • NVIDIA Jetson Nano
  • Raspberry Pi (Zero, 2, 3, 4)

Requirements

  • C99-compatible compiler
  • CMake (3.4+)
  • Windows only: MinGW is required to build the demo

Picovoice Account & AccessKey

  1. Log in or sign up for a free account on the Picovoice Console.
  2. Go to the AccessKey tab to create a new AccessKey or use an existing one. Be sure to keep your AccessKey secret.

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Usage

  1. Include the public header files (picovoice.h and pv_picovoice.h).
  2. Link the project to an appropriate precompiled library for the target platform and load it.
  3. Download language models for Porcupine and Rhino.
  4. Download a keyword file based on the desired language and the target platform.
  5. Download a context file based on the desired language and the target platform.
  6. Construct the Picovoice object:
static const char *ACCESS_KEY = "${ACCESS_KEY}";

const char *porcupine_model_path = "${PPN_FILE_PATH}";
const char *keyword_path = "${KEYWORD_FILE_PATH}";
const float porcupine_sensitivity = 0.5f;

void wake_word_callback(void) {
    // logic to execute upon detection of wake word
}

const char *rhino_model_path = "${RHN_FILE_PATH}";
const char *context_path = "${CONTEXT_FILE_PATH}";
const float rhino_sensitivity = 0.75f;

void inference_callback(pv_inference_t *inference) {
    if (inference->is_understood) {
        if (inference->num_slots > 0) {
            for (int32_t i = 0; i < inference->num_slots; i++) {
                // take action based on intent and slot values
            }
        }
    } else {
        // unsupported command
    }
    pv_inference_delete(inference);
}

pv_picovoice_t *picovoice = NULL;
const pv_status_t status = pv_picovoice_init(
        ACCESS_KEY,
        porcupine_model_path,
        keyword_path,
        porcupine_sensitivity,
        wake_word_callback,
        rhino_model_path,
        context_path,
        rhino_sensitivity,
        true,
        inference_callback,
        &picovoice);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  7. Pass frames of audio to pv_picovoice_process():
extern const int16_t *get_next_audio_frame(void);

while (true) {
    const int16_t *pcm = get_next_audio_frame();
    const pv_status_t status = pv_picovoice_process(picovoice, pcm);
    if (status != PV_STATUS_SUCCESS) {
        // error handling logic
    }
}
  8. Release resources explicitly when done with Picovoice:
pv_picovoice_delete(picovoice);
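The processing loop above relies on a get_next_audio_frame() function supplied by the integrator. Picovoice consumes single-channel, 16-bit linear PCM at the sample rate returned by pv_sample_rate(), in frames of pv_picovoice_frame_length() samples. Below is a minimal sketch that serves frames from an in-memory buffer; the fixed FRAME_LENGTH of 512 and the buffer variables are assumptions for illustration, and real code should query pv_picovoice_frame_length() and capture audio from a microphone or file instead.

```c
#include <stddef.h>
#include <stdint.h>

// Assumed fixed frame length for illustration; real code should use the
// value returned by pv_picovoice_frame_length() (typically 512 samples).
#define FRAME_LENGTH 512

// Backing store of single-channel, 16-bit PCM at pv_sample_rate()
// (e.g. decoded from a file or filled by an audio-capture callback).
static const int16_t *audio_buffer = NULL;
static size_t audio_buffer_length = 0;  // total samples available
static size_t read_offset = 0;          // samples consumed so far

// Returns a pointer to the next full frame, or NULL once the buffer is
// exhausted (a real microphone version would block until audio arrives).
const int16_t *get_next_audio_frame(void) {
    if (read_offset + FRAME_LENGTH > audio_buffer_length) {
        return NULL;
    }
    const int16_t *frame = &audio_buffer[read_offset];
    read_offset += FRAME_LENGTH;
    return frame;
}
```

Note that partial frames at the end of the buffer are dropped rather than padded, which matches the requirement that every call to pv_picovoice_process() receives a complete frame.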

Custom Wake Words & Contexts

Create custom wake word and context files using the Picovoice Console. Download the custom models (.ppn and .rhn) and create an instance of Picovoice using the custom models.
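Using custom models changes only the paths passed to pv_picovoice_init(); everything else in the Usage snippet stays the same. The file names below are placeholders, not real files:

```c
// Placeholder paths to custom models downloaded from the Picovoice Console.
const char *keyword_path = "/path/to/custom_wake_word.ppn";
const char *context_path = "/path/to/custom_context.rhn";
```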

Non-English Languages

Use the corresponding model file (.pv) to process non-English wake words and contexts. The model files for all supported languages are available on the Porcupine GitHub repository and the Rhino GitHub repository.
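For example, the model arguments to pv_picovoice_init() would point at language-specific .pv files. The German (de) paths below follow the porcupine_params_{lang}.pv / rhino_params_{lang}.pv naming pattern used in those repositories, but should be verified against the files you actually download:

```c
// Assumed German (de) model paths, following the naming pattern in the
// Porcupine and Rhino GitHub repositories.
const char *porcupine_model_path = "resources/porcupine/lib/common/porcupine_params_de.pv";
const char *rhino_model_path = "resources/rhino/lib/common/rhino_params_de.pv";
```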

Demo

For the Picovoice C SDK, we offer demo applications that demonstrate how to use the Picovoice Platform on real-time audio streams (i.e. microphone input) and audio files.

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/picovoice.git
  2. Build the microphone demo:
cd picovoice
cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target picovoice_demo_mic

Usage

To see the usage options for the demo:

./demo/c/build/picovoice_demo_mic

Ensure you have a working microphone connected to your system and run the following command to infer intent from spoken commands:

./demo/c/build/picovoice_demo_mic \
-a ${ACCESS_KEY} \
-l sdk/c/lib/${PLATFORM}/${ARCH}/libpicovoice.so \
-p resources/porcupine/lib/common/porcupine_params.pv \
-k resources/porcupine/resources/keyword_files/${PLATFORM}/picovoice_${PLATFORM}.ppn \
-r resources/rhino/lib/common/rhino_params.pv \
-c resources/rhino/resources/contexts/${PLATFORM}/smart_lighting_${PLATFORM}.rhn \
-i ${AUDIO_DEVICE_INDEX}

For more information on our Picovoice demos for C, head over to our GitHub repository.

Resources

API

GitHub

Benchmark
