Rhino - C Quick Start

Requirements

  • C99-compatible compiler
  • CMake 3.4 or higher
  • For Windows only: MinGW is required to build the demo

Compatibility

  • Linux (x86_64)
  • macOS (x86_64)
  • Windows (x86_64)
  • Raspberry Pi (all variants)
  • NVIDIA Jetson (Nano)
  • BeagleBone

Setup

Cloning the Repository

If using SSH, clone the repository with:

git clone --recurse-submodules [email protected]:Picovoice/rhino.git

If using HTTPS, clone the repository with:

git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Microphone Demo

Build

First, go to the root of the repository:

cd rhino

Build the microphone demo:

cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target rhino_demo_mic
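
The demo's CMake project locates the headers and the shared library for you. If you later want to link your own program against the prebuilt library instead, a minimal CMakeLists.txt along these lines should work; the project name, source file, and platform folder below are illustrative and need to be adapted to your setup:

cmake_minimum_required(VERSION 3.4)
project(my_rhino_app C)

set(CMAKE_C_STANDARD 99)

add_executable(my_rhino_app main.c)

# Point RHINO_DIR at your clone of the repository and swap the platform
# folder (linux/x86_64, mac/x86_64, raspberry-pi/${CPU}, ...) to match
# your system.
set(RHINO_DIR "${CMAKE_CURRENT_LIST_DIR}/rhino")
target_include_directories(my_rhino_app PRIVATE "${RHINO_DIR}/include")
target_link_libraries(my_rhino_app "${RHINO_DIR}/lib/linux/x86_64/libpv_rhino.so")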

List the available input devices by running the command below.

Linux, macOS, Raspberry Pi, BeagleBone, Jetson

./demo/c/build/rhino_demo_mic --show_audio_devices

Windows

.\demo\c\build\rhino_demo_mic.exe --show_audio_devices

Note the index of your preferred audio device; the following instructions refer to it as ${AUDIO_DEVICE_INDEX}.

Run

The following commands start a microphone audio stream and infer intents from spoken commands within the context of a smart lighting system. Replace ${AUDIO_DEVICE_INDEX} with the index of your audio device.

Linux

./demo/c/build/rhino_demo_mic lib/linux/x86_64/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/linux/smart_lighting_linux.rhn ${AUDIO_DEVICE_INDEX}

macOS

./demo/c/build/rhino_demo_mic lib/mac/x86_64/libpv_rhino.dylib lib/common/rhino_params.pv \
resources/contexts/mac/smart_lighting_mac.rhn ${AUDIO_DEVICE_INDEX}

Raspberry Pi

Replace ${CPU} in the command below based on the model of your Raspberry Pi: cortex-a72 for Raspberry Pi 4, cortex-a72-aarch64 for Raspberry Pi 4 running a 64-bit OS, cortex-a53 for Raspberry Pi 3, cortex-a53-aarch64 for Raspberry Pi 3 running a 64-bit OS, cortex-a7 for Raspberry Pi 2, and arm11 for the rest:

./demo/c/build/rhino_demo_mic lib/raspberry-pi/${CPU}/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn ${AUDIO_DEVICE_INDEX}

BeagleBone

./demo/c/build/rhino_demo_mic lib/beaglebone/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/beaglebone/smart_lighting_beaglebone.rhn ${AUDIO_DEVICE_INDEX}

Jetson

./demo/c/build/rhino_demo_mic lib/jetson/cortex-a57-aarch64/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/jetson/smart_lighting_jetson.rhn ${AUDIO_DEVICE_INDEX}

Windows

.\demo\c\build\rhino_demo_mic.exe lib/windows/amd64/libpv_rhino.dll lib/common/rhino_params.pv resources/contexts/windows/smart_lighting_windows.rhn ${AUDIO_DEVICE_INDEX}

Once the demo is running, it will start listening for voice commands within the given context. For example, you can say:

"Turn on the lights."

If understood correctly, the following prints to the console:

{
    'is_understood' : 'true',
    'intent' : 'changeLightState',
    'slots' : {
        'state' : 'on',
    }
}
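
For orientation, here is a condensed sketch of the loop the microphone demo implements, assuming the v1.x C API declared in include/pv_rhino.h (names and signatures may differ in other versions; check the header in your clone). The demo itself loads libpv_rhino dynamically; this sketch links against it directly, and read_next_frame is a hypothetical stand-in for your audio source:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include "pv_rhino.h"

/* hypothetical helper: fills `pcm` with `frame_length` samples of 16-bit,
   16 kHz, single-channel microphone audio */
extern void read_next_frame(int16_t *pcm, int32_t frame_length);

int run_inference(const char *model_path, const char *context_path) {
    pv_rhino_t *rhino = NULL;
    if (pv_rhino_init(model_path, context_path, 0.5f, &rhino) != PV_STATUS_SUCCESS) {
        fprintf(stderr, "failed to initialize Rhino\n");
        return -1;
    }

    const int32_t frame_length = pv_rhino_frame_length();
    int16_t pcm[frame_length]; /* C99 VLA */

    /* feed frames until Rhino has heard enough to finalize the inference */
    bool is_finalized = false;
    while (!is_finalized) {
        read_next_frame(pcm, frame_length);
        if (pv_rhino_process(rhino, pcm, &is_finalized) != PV_STATUS_SUCCESS) {
            break;
        }
    }

    bool is_understood = false;
    pv_rhino_is_understood(rhino, &is_understood);
    if (is_understood) {
        const char *intent = NULL;
        int32_t num_slots = 0;
        const char **slots = NULL;
        const char **values = NULL;
        pv_rhino_get_intent(rhino, &intent, &num_slots, &slots, &values);
        printf("intent : %s\n", intent);
        for (int32_t i = 0; i < num_slots; i++) {
            printf("%s : %s\n", slots[i], values[i]);
        }
        pv_rhino_free_slots_and_values(rhino, slots, values);
    }

    pv_rhino_delete(rhino);
    return 0;
}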

File Demo

Build

First, go to the root of the repository:

cd rhino

Build the file demo:

cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target rhino_demo_file

Run

The following commands process a WAV file under resources/audio_samples/ and infer the intent within the context of a coffee maker system.

Linux

./demo/c/build/rhino_demo_file lib/linux/x86_64/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/linux/coffee_maker_linux.rhn resources/audio_samples/test_within_context.wav

macOS

./demo/c/build/rhino_demo_file lib/mac/x86_64/libpv_rhino.dylib lib/common/rhino_params.pv \
resources/contexts/mac/coffee_maker_mac.rhn resources/audio_samples/test_within_context.wav

Raspberry Pi

Replace ${CPU} in the command below based on the model of your Raspberry Pi: cortex-a72 for Raspberry Pi 4, cortex-a72-aarch64 for Raspberry Pi 4 running a 64-bit OS, cortex-a53 for Raspberry Pi 3, cortex-a53-aarch64 for Raspberry Pi 3 running a 64-bit OS, cortex-a7 for Raspberry Pi 2, and arm11 for the rest:

./demo/c/build/rhino_demo_file lib/raspberry-pi/${CPU}/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/raspberry-pi/coffee_maker_raspberry-pi.rhn resources/audio_samples/test_within_context.wav

BeagleBone

./demo/c/build/rhino_demo_file lib/beaglebone/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/beaglebone/coffee_maker_beaglebone.rhn resources/audio_samples/test_within_context.wav

Jetson

./demo/c/build/rhino_demo_file lib/jetson/cortex-a57-aarch64/libpv_rhino.so lib/common/rhino_params.pv \
resources/contexts/jetson/coffee_maker_jetson.rhn resources/audio_samples/test_within_context.wav

Windows

.\demo\c\build\rhino_demo_file.exe lib/windows/amd64/libpv_rhino.dll lib/common/rhino_params.pv resources/contexts/windows/coffee_maker_windows.rhn resources/audio_samples/test_within_context.wav

The following prints to the console:

{
    'is_understood' : 'true',
    'intent' : 'orderBeverage',
    'slots' : {
        'size' : 'medium',
        'numberOfShots' : 'double shot',
        'beverage' : 'americano',
    }
}
real time factor : 0.011
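
The real time factor is the processing time divided by the audio length, so 0.011 means the file was processed roughly 90x faster than real time. Below is a sketch of how such a file-processing loop can be written and timed, under the same v1.x API assumption as above; the 44-byte header skip assumes a canonical 16 kHz, 16-bit, mono PCM WAV, and the actual demo source lives under demo/c/:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#include "pv_rhino.h"

void process_wav(pv_rhino_t *rhino, const char *wav_path) {
    FILE *wav = fopen(wav_path, "rb");
    if (!wav) {
        fprintf(stderr, "failed to open '%s'\n", wav_path);
        return;
    }
    fseek(wav, 44, SEEK_SET); /* skip the canonical PCM WAV header */

    const int32_t frame_length = pv_rhino_frame_length();
    int16_t pcm[frame_length];
    int64_t total_samples = 0;
    double proc_sec = 0.0;
    bool is_finalized = false;

    while (!is_finalized &&
           fread(pcm, sizeof(int16_t), (size_t) frame_length, wav) == (size_t) frame_length) {
        clock_t tick = clock();
        pv_rhino_process(rhino, pcm, &is_finalized);
        proc_sec += (double) (clock() - tick) / CLOCKS_PER_SEC;
        total_samples += frame_length;
    }
    fclose(wav);

    /* real time factor = processing time / audio duration (audio is 16 kHz) */
    if (total_samples > 0) {
        printf("real time factor : %.3f\n", proc_sec / ((double) total_samples / 16000.0));
    }
}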

Custom Context

You can create custom Rhino context models using Picovoice Console.

