Picovoice Platform — Raspberry Pi Quick Start

Requirements

Python:

  • Python 3

Node:

  • NodeJS 10+

C:

  • C99-compatible compiler
  • CMake 3.4 or higher

Compatibility

  • Raspberry Pi (all variants)

Setup

Cloning the Repository

If using SSH, clone the repository with:

git clone --recurse-submodules [email protected]:Picovoice/picovoice.git

If using HTTPS, clone the repository with:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Microphone

Connect the microphone and get the list of available input audio devices:

arecord -L

The output will be similar to the following:

null
    Discard all samples (playback) or generate zero samples (capture)
default
mic
sysdefault:CARD=Device
    USB PnP Sound Device, USB Audio
    Default Audio Device
hw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Direct hardware device without any conversions
plughw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Hardware device with all software conversions

In this case, we pick plughw:CARD=Device,DEV=0. Note that this device comes with software conversions, which are handy for resampling. In what follows, we refer to this value as ${INPUT_AUDIO_DEVICE}.
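You can also enumerate input devices programmatically. The minimal sketch below uses PyAudio (installed later, in the Python demo section) to print the index and name of every input-capable device; it is illustrative only and not part of the official setup:

import pyaudio

# List every input-capable audio device known to PortAudio.
pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    if info['maxInputChannels'] > 0:
        print(i, info['name'])
pa.terminate()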

Create ~/.asoundrc with the following content:

pcm.!default {
    type asym
    capture.pcm "mic"
}
pcm.mic {
    type plug
    slave {
        pcm "${INPUT_AUDIO_DEVICE}"
    }
}

If you have a speaker, add a section for it to ~/.asoundrc as well, as in the sketch below.
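For example, extend pcm.!default with a playback PCM and add a matching section. The ${OUTPUT_AUDIO_DEVICE} placeholder below is hypothetical; pick a playback device name from the output of aplay -L:

pcm.!default {
    type asym
    capture.pcm "mic"
    playback.pcm "speaker"
}
pcm.speaker {
    type plug
    slave {
        pcm "${OUTPUT_AUDIO_DEVICE}"
    }
}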

Check if the microphone works properly by recording audio into a file:

arecord --format=S16_LE --duration=5 --rate=16000 --file-type=wav ~/test.wav

If the command above executes without any errors, then the microphone is functioning as expected. We recommend inspecting the recorded file for recording side effects such as clipping.
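If you prefer to check for clipping programmatically, here is a minimal Python sketch, assuming the 16-bit mono recording produced by the command above. It reports the peak amplitude and the number of samples at full scale:

import array
import os
import wave

# Read the 16-bit mono test recording made with arecord above.
path = os.path.expanduser('~/test.wav')
with wave.open(path, 'rb') as f:
    samples = array.array('h', f.readframes(f.getnframes()))

# Samples at (or beyond) 16-bit full scale indicate clipping.
peak = max(abs(s) for s in samples)
clipped = sum(1 for s in samples if abs(s) >= 32767)
print('peak amplitude: %d / 32767, clipped samples: %d' % (peak, clipped))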

Demo Applications

You can use Picovoice on Raspberry Pi with multiple SDKs: Python, NodeJS, C, Android, WebAssembly, etc. This article covers Python, NodeJS, and C.

Python

Install PyAudio:

sudo apt-get install python3-pyaudio

Then install the package:

sudo pip3 install picovoicedemo

Run the demo from a terminal:

picovoice_demo_mic \
--keyword_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_path ${PATH_TO_RHINO_CONTEXT_FILE}

The demo reads audio from the microphone, processes it in real-time, and outputs to the terminal when a wake word is detected or the user's intent is inferred from a follow-on voice command.
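If you want to embed the same behavior in your own application rather than run the demo, the sketch below shows the general pattern using the picovoice pip package together with PyAudio. It is a minimal, illustrative sketch: the placeholder paths must be replaced with real model files, and newer SDK versions may require additional arguments (such as an AccessKey).

import struct

import pyaudio
from picovoice import Picovoice

def wake_word_callback():
    print('[wake word]')

def inference_callback(inference):
    # Inference objects expose is_understood, intent, and slots.
    if inference.is_understood:
        print(inference.intent, inference.slots)

pv = Picovoice(
    keyword_path='${PATH_TO_PORCUPINE_KEYWORD_FILE}',
    wake_word_callback=wake_word_callback,
    context_path='${PATH_TO_RHINO_CONTEXT_FILE}',
    inference_callback=inference_callback)

pa = pyaudio.PyAudio()
stream = pa.open(
    rate=pv.sample_rate,              # 16 kHz
    channels=1,
    format=pyaudio.paInt16,
    input=True,
    frames_per_buffer=pv.frame_length)

try:
    # Feed the engine one frame of 16-bit samples at a time.
    while True:
        pcm = stream.read(pv.frame_length)
        pv.process(struct.unpack_from('%dh' % pv.frame_length, pcm))
finally:
    stream.close()
    pa.terminate()
    pv.delete()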

If you do not have custom Porcupine and Rhino models, you can use the pre-trained ones available in the repository. From the root of the cloned repository, run the demo:

picovoice_demo_mic \
--keyword_path resources/porcupine/resources/keyword_files/raspberry-pi/porcupine_raspberry-pi.ppn \
--context_path resources/rhino/resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn

With a working microphone connected, say:

Porcupine, set the lights in the living room to purple.

NodeJS

To install the demos and make them available on the command line, use either of the following yarn or npm commands:

yarn global add @picovoice/picovoice-node-demo

or

npm install -g @picovoice/picovoice-node-demo

Run the demo from a terminal:

pv-mic-demo \
--keyword_file_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_file_path ${PATH_TO_RHINO_CONTEXT_FILE}

The demo reads audio from the microphone, processes it in real-time, and outputs to the terminal when a wake word is detected or the user's intent is inferred from a follow-on voice command.

If you don't have custom Porcupine and Rhino models, you can use the pre-trained ones available in the repository. From the root of the cloned repository, run the demo:

pv-mic-demo \
--keyword_file_path resources/porcupine/resources/keyword_files/raspberry-pi/porcupine_raspberry-pi.ppn \
--context_file_path resources/rhino/resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn

With a working microphone connected, say:

Porcupine, set the lights in the living room to purple.

C

Go to the root of the cloned repository:

cd picovoice

Compile the C demo application with CMake:

cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target picovoice_demo_mic

List input audio devices with:

./demo/c/build/picovoice_demo_mic --show_audio_devices

In the command below, replace ${CPU} based on the trim of your Raspberry Pi: cortex-a72 for Raspberry Pi 4, cortex-a53 for Raspberry Pi 3, cortex-a7 for Raspberry Pi 2, and arm11 for the rest. Replace ${AUDIO_DEVICE_INDEX} with the index of the audio device selected using the previous command, then run the demo:

./demo/c/build/picovoice_demo_mic \
sdk/c/lib/raspberry-pi/${CPU}/libpicovoice.so \
resources/porcupine/lib/common/porcupine_params.pv \
resources/porcupine/resources/keyword_files/raspberry-pi/porcupine_raspberry-pi.ppn \
0.5 \
resources/rhino/lib/common/rhino_params.pv \
resources/rhino/resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn \
0.5 \
${AUDIO_DEVICE_INDEX}
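For example, on a Raspberry Pi 4 (cortex-a72) where --show_audio_devices reported the USB microphone at index 0 (both values are assumptions; substitute your own):

./demo/c/build/picovoice_demo_mic \
sdk/c/lib/raspberry-pi/cortex-a72/libpicovoice.so \
resources/porcupine/lib/common/porcupine_params.pv \
resources/porcupine/resources/keyword_files/raspberry-pi/porcupine_raspberry-pi.ppn \
0.5 \
resources/rhino/lib/common/rhino_params.pv \
resources/rhino/resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn \
0.5 \
0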

The demo continuously processes audio from the microphone. First, it listens for the wake word phrase; when it hears "porcupine", the following is printed to the console:

[wake word]

After the wake word is detected, you may say follow-on voice commands (in the context of smart lighting). For example:

"Set the lights in the living room to purple."

Create Custom Wake Words & Contexts

You can create custom Porcupine wake word and Rhino context models using Picovoice Console.

