Rhino - C Quick Start

Platforms

  • Linux (x86_64)
  • macOS (x86_64, arm64)
  • Windows (x86_64)
  • BeagleBone
  • NVIDIA Jetson Nano
  • Raspberry Pi (Zero, 2, 3, 4)

Requirements

  • C99-compatible compiler
  • CMake (3.4+)
  • For Windows only: MinGW is required to build the demo

Picovoice Account & AccessKey

  1. Log in or sign up for a free account on the Picovoice Console.
  2. Go to the AccessKey tab to create one or use an existing AccessKey. Be sure to keep your AccessKey secret.

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Usage

  1. Include the public header files (picovoice.h and pv_rhino.h).
  2. Link the project to the appropriate precompiled library for the target platform.
  3. Download a Rhino model file (.pv) for the desired language.
  4. Download a context file (.rhn) for the desired language and target platform.
  5. Construct the Rhino object:
static const char* ACCESS_KEY = "${ACCESS_KEY}"; // AccessKey obtained from the Picovoice Console

const char *model_file_path = "${MODEL_FILE_PATH}";
const char *context_file_path = "${CONTEXT_FILE_PATH}";
const float sensitivity = 0.5f;

pv_rhino_t *rhino = NULL;
const pv_status_t status = pv_rhino_init(
        ACCESS_KEY,
        model_file_path,
        context_file_path,
        sensitivity,
        true, // require_endpoint
        &rhino);
if (status != PV_STATUS_SUCCESS) {
    // add error handling code
}
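
If initialization fails, the returned status can be turned into a readable message with pv_status_to_string() (declared in picovoice.h). A minimal sketch of the error-handling branch, assuming <stdio.h> and <stdlib.h> are included:

// sketch only: report the failure and abort instead of the bare comment above
if (status != PV_STATUS_SUCCESS) {
    fprintf(stderr, "Failed to initialize Rhino: %s\n", pv_status_to_string(status));
    exit(EXIT_FAILURE);
}
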
  6. Pass in frames of audio to the pv_rhino_process function:
extern const int16_t *get_next_audio_frame(void);

while (true) {
    const int16_t *pcm = get_next_audio_frame();
    bool is_finalized;
    pv_status_t status = pv_rhino_process(rhino, pcm, &is_finalized);
    if (status != PV_STATUS_SUCCESS) {
        // add error handling code
    }
    if (is_finalized) {
        bool is_understood;
        status = pv_rhino_is_understood(rhino, &is_understood);
        if (status != PV_STATUS_SUCCESS) {
            // add error handling code
        }
        if (is_understood) {
            const char *intent;
            int32_t num_slots;
            const char **slots;
            const char **values;
            status = pv_rhino_get_intent(
                    rhino,
                    &intent,
                    &num_slots,
                    &slots,
                    &values);
            if (status != PV_STATUS_SUCCESS) {
                // add error handling code
            }
            // add code to take action based on inferred intent and slot values
            pv_rhino_free_slots_and_values(rhino, slots, values);
        } else {
            // add code to handle unsupported commands
        }
        pv_rhino_reset(rhino);
    }
}
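
The loop above expects get_next_audio_frame() to deliver single-channel, 16-bit frames of pv_rhino_frame_length() samples recorded at pv_sample_rate(). A minimal sketch of such a source, assuming a hypothetical raw PCM file handle pcm_file opened elsewhere:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// hypothetical audio source for the sketch: pcm_file is a single-channel,
// 16-bit raw PCM recording at pv_sample_rate(), opened by the caller
static FILE *pcm_file;

const int16_t *get_next_audio_frame(void) {
    static int16_t *frame = NULL;
    const int32_t frame_length = pv_rhino_frame_length();
    if (frame == NULL) {
        frame = malloc(frame_length * sizeof(int16_t));
    }
    const size_t num_read = fread(frame, sizeof(int16_t), (size_t) frame_length, pcm_file);
    if (num_read < (size_t) frame_length) {
        // end of the recording: pad the remainder of the frame with silence
        memset(frame + num_read, 0, ((size_t) frame_length - num_read) * sizeof(int16_t));
    }
    return frame;
}
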
  7. Release resources explicitly when done with Rhino:
pv_rhino_delete(rhino);

Custom Contexts

Create a custom context using the Picovoice Console. Download the custom context file (.rhn) and create an instance of Rhino using it as the context file.
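
To confirm at runtime which context was loaded, the source of the context can be printed with pv_rhino_context_info(). A brief sketch (the returned string is owned by the Rhino object and is not freed here):

// print the source of the loaded context to verify the custom .rhn was picked up
const char *context_info = NULL;
const pv_status_t info_status = pv_rhino_context_info(rhino, &context_info);
if (info_status != PV_STATUS_SUCCESS) {
    // add error handling code
}
fprintf(stdout, "%s\n", context_info);
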

Non-English Languages

Use the corresponding model file (.pv) to infer intent from spoken commands in languages other than English. The model files for all supported languages are available on the Rhino GitHub repository.
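
For example, for German, only the model path passed to pv_rhino_init changes; the file name below assumes the language-specific models shipped under lib/common, and the context file must also be one created for German:

// assumed file name for the German model shipped in the repository
const char *model_file_path = "lib/common/rhino_params_de.pv";
const char *context_file_path = "${GERMAN_CONTEXT_FILE_PATH}";
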

Demo

For the Rhino C SDK, we offer demo applications that demonstrate how to use the Speech-to-Intent engine on real-time audio streams (i.e. microphone input) and audio files.

Setup

  1. Clone the repository:
git clone --recurse-submodules https://github.com/Picovoice/rhino.git
  2. Build the microphone demo:
cd rhino
cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target rhino_demo_mic
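
The repository also includes a file-based demo next to the microphone demo; assuming the target name matches its source file in demo/c, it can be built the same way:

cmake --build demo/c/build --target rhino_demo_file
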

Usage

To see the usage options for the demo:

./demo/c/build/rhino_demo_mic

Ensure you have a working microphone connected to your system and run the following command to infer intent from spoken commands:

./demo/c/build/rhino_demo_mic \
-l lib/${PLATFORM}/${ARCH}/libpv_rhino.so \
-m lib/common/rhino_params.pv \
-c resources/contexts/${PLATFORM}/smart_lighting_${PLATFORM}.rhn \
-d ${AUDIO_DEVICE_INDEX} \
-a ${ACCESS_KEY}
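
For example, on Linux (x86_64) with the repository's layout, the command might look like this (the audio device index 0 is only an assumption; run the demo without arguments to see the available options):

./demo/c/build/rhino_demo_mic \
-l lib/linux/x86_64/libpv_rhino.so \
-m lib/common/rhino_params.pv \
-c resources/contexts/linux/smart_lighting_linux.rhn \
-d 0 \
-a ${ACCESS_KEY}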

For more information on our Rhino demos for C, head over to our GitHub repository.

Resources

  • API
  • GitHub
  • Benchmark

