Rhino - Raspberry Pi Quick Start
Requirements
- Python 3 (or NodeJS 10+)
Compatibility
- Raspberry Pi (all variants)
Setup
Cloning the Repository
If using SSH, clone the repository with:
git clone --recurse-submodules [email protected]:Picovoice/rhino.git
If using HTTPS, then type
git clone --recurse-submodules https://github.com/Picovoice/rhino.git
Microphone
Connect the microphone and get the list of available input audio devices:
arecord -L
The output will be similar to the following:
null
    Discard all samples (playback) or generate zero samples (capture)
default
mic
sysdefault:CARD=Device
    USB PnP Sound Device, USB Audio
    Default Audio Device
hw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Direct hardware device without any conversions
plughw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Hardware device with all software conversions
In this case, we pick plughw:CARD=Device,DEV=0. Note that this device comes with software conversions, which are handy for resampling. In what follows, we refer to this value as ${INPUT_AUDIO_DEVICE}.
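Selecting the capture device can also be scripted. Below is a minimal sketch (not part of the repository) that parses an `arecord -L` listing and prefers a `plughw:` PCM, since those include the software conversions needed for 16 kHz capture:

```python
def pick_capture_device(arecord_listing: str) -> str:
    """Pick a capture PCM name from the output of `arecord -L`.

    Prefers `plughw:` devices because they apply software conversions
    (e.g. resampling), which the engine's 16 kHz input benefits from.
    """
    # Device names start in column 0; indented lines are descriptions.
    names = [line for line in arecord_listing.splitlines()
             if line and not line[0].isspace()]
    for name in names:
        if name.startswith("plughw:"):
            return name
    raise RuntimeError("no plughw capture device found")
```

Feed it the captured output of `arecord -L` (for example via `subprocess.run(["arecord", "-L"], capture_output=True, text=True).stdout`) and use the returned name as ${INPUT_AUDIO_DEVICE}.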
Create ~/.asoundrc with the following content:
pcm.!default {
    type asym
    capture.pcm "mic"
}
pcm.mic {
    type plug
    slave {
        pcm ${INPUT_AUDIO_DEVICE}
    }
}
If you have a speaker, add a section for it to ~/.asoundrc as well.
Check if the microphone works properly by recording audio into a file:
arecord --format=S16_LE --duration=5 --rate=16000 --file-type=wav ~/test.wav
If the command above executes without any errors, then the microphone is functioning as expected. We recommend inspecting the recorded file for recording side effects such as clipping.
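One simple automated check for clipping is to count how many samples sit at (or next to) the 16-bit limits. The sketch below (not part of the demo; assumes a mono or interleaved 16-bit little-endian WAV, which is what the `arecord` command above produces) uses only the Python standard library:

```python
import array
import wave

def clipping_ratio(wav_path: str) -> float:
    """Fraction of 16-bit samples at (or within one step of) the
    int16 limits. A ratio noticeably above zero suggests the recording
    is clipping; try lowering the microphone gain (e.g. in alsamixer).
    """
    with wave.open(wav_path, "rb") as wav:
        assert wav.getsampwidth() == 2, "expected 16-bit samples"
        # array("h") reads native-endian int16; Raspberry Pi is
        # little-endian, matching the WAV sample layout.
        samples = array.array("h", wav.readframes(wav.getnframes()))
    if not samples:
        return 0.0
    clipped = sum(1 for s in samples if s <= -32767 or s >= 32766)
    return clipped / len(samples)
```

For example, `clipping_ratio("/home/pi/test.wav")` on the recording made above should return a value close to 0.0 for a clean signal.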
Installation
The core of the Speech-to-Intent engine is shipped as a pre-compiled ANSI C library. Hence, it can be used within a C/C++ application directly, or from a high-level language such as Python via its bindings.
Python
Install PyAudio:
sudo apt-get install python3-pyaudio
Then install the demo package:
sudo pip3 install pvrhinodemo
From the root of the repository, run the microphone demo application. It opens an input audio stream and monitors it using the Picovoice wake word detection engine; when the wake phrase ("Picovoice") is detected, it extracts the intent from the follow-up spoken command using the Speech-to-Intent engine.
rhino_demo_mic --context_path resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn
Now you can say something like "turn on the lights in the kitchen" and it will output the result of the inference to the terminal:
{
    intent : 'changeLightState'
    slots : {
        state : 'on'
        location : 'kitchen'
    }
}
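The demo only prints the inference; in your own application you would act on it. A minimal sketch of such a dispatcher, assuming the inference is available as a plain dict whose `intent` and `slots` keys mirror the printed output (the handler logic here is hypothetical, not part of the demo):

```python
def dispatch(inference: dict) -> str:
    """Route an inference (dict mirroring the demo's output) to an action."""
    intent = inference.get("intent")
    slots = inference.get("slots", {})
    if intent == "changeLightState":
        # In a real application, drive a GPIO pin or call a lighting
        # API here instead of returning a status string.
        return f"turning {slots['state']} the {slots['location']} lights"
    return "unhandled intent"

print(dispatch({
    "intent": "changeLightState",
    "slots": {"state": "on", "location": "kitchen"},
}))
```

The context file defines which intents and slots can appear, so a dispatcher like this can enumerate them exhaustively.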
C
Install the ALSA development library:
sudo apt-get install libasound-dev
Change the current directory to the root of the repository and compile the C demo application:
gcc -O3 -o demo/c/rhino_demo_mic -I include demo/c/rhino_demo_mic.c -ldl -lasound -std=c99
Then run the demo. It opens an input audio stream and extracts the intent from the spoken command using the Speech-to-Intent engine. Replace ${CPU} in the command below based on the variant of Raspberry Pi (cortex-a72 for Raspberry Pi 4, cortex-a53 for Raspberry Pi 3, cortex-a7 for Raspberry Pi 2, and arm11 for the rest) and run the demo:
demo/c/rhino_demo_mic \
    lib/raspberry-pi/${CPU}/libpv_rhino.so \
    lib/common/rhino_params.pv \
    resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn \
    ${INPUT_AUDIO_DEVICE}
Now you can say something like "turn on the lights in the kitchen" and it will output the result of the inference to the terminal:
{
    intent : 'changeLightState'
    slots : {
        state : 'on'
        location : 'kitchen'
    }
}
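The ${CPU} value can also be determined programmatically rather than by hand. The sketch below (not part of the repository) reads the `CPU part` field of /proc/cpuinfo; the part-number mapping is our assumption based on the ARM implementer documentation, so verify it against your board:

```python
def rhino_cpu_dir(cpuinfo: str) -> str:
    """Map the `CPU part` field of /proc/cpuinfo to the ${CPU}
    directory name used by the Rhino library layout.

    Assumed part numbers: 0xd08 = Cortex-A72 (Pi 4),
    0xd03 = Cortex-A53 (Pi 3), 0xc07 = Cortex-A7 (Pi 2),
    0xb76 = ARM1176 (Pi 1 / Pi Zero).
    """
    parts = {
        "0xd08": "cortex-a72",
        "0xd03": "cortex-a53",
        "0xc07": "cortex-a7",
        "0xb76": "arm11",
    }
    for line in cpuinfo.splitlines():
        if line.lower().startswith("cpu part"):
            value = line.split(":")[-1].strip().lower()
            if value in parts:
                return parts[value]
    raise RuntimeError("unrecognized CPU; pick ${CPU} manually")
```

Call it with `open("/proc/cpuinfo").read()` on the Pi and substitute the result for ${CPU} in the command above.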
Custom Context
You can create custom Rhino context models using Picovoice Console.