Picovoice Platform — Raspberry Pi Quick Start

Requirements

  • Python 3 (or NodeJS 10+)

Compatibility

  • Raspberry Pi (all variants)

Setup

Cloning the Repository

If using SSH, clone the repository with:

git clone --recurse-submodules git@github.com:Picovoice/picovoice.git

If using HTTPS, clone the repository with:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Microphone

Connect the microphone and get the list of available input audio devices:

arecord -L

The output will be similar to the following:

null
    Discard all samples (playback) or generate zero samples (capture)
default
mic
sysdefault:CARD=Device
    USB PnP Sound Device, USB Audio
    Default Audio Device
hw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Direct hardware device without any conversions
plughw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Hardware device with all software conversions

In this case, we pick plughw:CARD=Device,DEV=0. Note that this device comes with software conversions, which are handy for resampling. In what follows, we refer to this value as ${INPUT_AUDIO_DEVICE}.

Create ~/.asoundrc with the following content:

pcm.!default {
    type asym
    capture.pcm "mic"
}
pcm.mic {
    type plug
    slave {
        pcm ${INPUT_AUDIO_DEVICE}
    }
}

If you have a speaker, add a playback section to ~/.asoundrc as well.
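
For example, a playback section could look like the following sketch, where ${OUTPUT_AUDIO_DEVICE} is a placeholder (not taken from this guide) for the playback device reported by aplay -L; adjust it to your hardware:

pcm.!default {
    type asym
    capture.pcm "mic"
    playback.pcm "speaker"
}
pcm.speaker {
    type plug
    slave {
        pcm ${OUTPUT_AUDIO_DEVICE}
    }
}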

Check if the microphone works properly by recording audio into a file:

arecord --format=S16_LE --duration=5 --rate=16000 --file-type=wav ~/test.wav

If the command above executes without any errors, the microphone is functioning as expected. We recommend inspecting the recorded file for side effects such as clipping.
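
One quick way to inspect the recording is to play it back. aplay is part of alsa-utils and assumes an output device is configured:

aplay ~/test.wav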

Demo Applications

Picovoice is available on Raspberry Pi through multiple SDKs: Python, NodeJS, C, Android, WebAssembly, etc. Below we look at the Python and NodeJS demos.

Python Demos

Install PyAudio:

sudo apt-get install python3-pyaudio

Then install the package:

sudo pip3 install picovoicedemo

Run the demo from a terminal:

picovoice_demo_mic \
--keyword_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_path ${PATH_TO_RHINO_CONTEXT_FILE}

The demo reads audio from the microphone, processes it in real time, and prints to the terminal when the wake word is detected or the user's intent is inferred from a follow-on voice command.
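
Under the hood, the demo is built on the picovoice Python SDK. The sketch below shows the core loop, assuming the picovoice and pyaudio packages; it illustrates the pattern rather than reproducing the demo's exact source, and recent SDK versions may also require an AccessKey from Picovoice Console. The keyword and context paths are the same placeholders as in the command above.

import struct

import pyaudio
from picovoice import Picovoice


def wake_word_callback():
    # Invoked when the Porcupine wake word is detected.
    print("[wake word]")


def inference_callback(inference):
    # Invoked when Rhino finishes inferring intent from the follow-on command.
    if inference.is_understood:
        print("intent: %s, slots: %s" % (inference.intent, inference.slots))
    else:
        print("didn't understand the command")


handle = Picovoice(
    keyword_path="${PATH_TO_PORCUPINE_KEYWORD_FILE}",
    wake_word_callback=wake_word_callback,
    context_path="${PATH_TO_RHINO_CONTEXT_FILE}",
    inference_callback=inference_callback)

pa = pyaudio.PyAudio()
stream = pa.open(
    rate=handle.sample_rate,
    channels=1,
    format=pyaudio.paInt16,
    input=True,
    frames_per_buffer=handle.frame_length)

try:
    while True:
        # Read one frame of 16-bit PCM samples and hand it to the engine.
        pcm = stream.read(handle.frame_length)
        pcm = struct.unpack_from("h" * handle.frame_length, pcm)
        handle.process(pcm)
finally:
    stream.close()
    pa.terminate()
    handle.delete()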

If you don't have custom Porcupine and Rhino models, you can use the pre-trained ones available in the repository. From the root of the cloned repository, run the demo:

picovoice_demo_mic \
--keyword_path resources/porcupine/resources/keyword_files/raspberry-pi/porcupine_raspberry-pi.ppn \
--context_path resources/rhino/resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn

With a working microphone connected, say:

Porcupine, set the lights in the living room to purple.

NodeJS Demos

To install the demos and make them available on the command line, use either of the following yarn or npm commands:

yarn global add @picovoice/picovoice-node-demo

or

npm install -g @picovoice/picovoice-node-demo

Run the demo from a terminal:

pv-mic-demo \
--keyword_file_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_file_path ${PATH_TO_RHINO_CONTEXT_FILE}

The demo reads audio from the microphone, processes it in real time, and prints to the terminal when the wake word is detected or the user's intent is inferred from a follow-on voice command.

If you don't have custom Porcupine and Rhino models, you can use the pre-trained ones available in the repository. From the root of the cloned repository, run the demo:

pv-mic-demo \
--keyword_file_path resources/porcupine/resources/keyword_files/raspberry-pi/porcupine_raspberry-pi.ppn \
--context_file_path resources/rhino/resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn

With a working microphone connected, say:

Porcupine, set the lights in the living room to purple.

Custom Wake Word & Context

You can create custom Porcupine wake word and Rhino context models using Picovoice Console.
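
Trained models are downloaded from the Console as .ppn (Porcupine) and .rhn (Rhino) files for the raspberry-pi platform. Pass their paths to the demo in place of the pre-trained ones; the file names below are hypothetical:

picovoice_demo_mic \
--keyword_path ~/my_wake_word_raspberry-pi.ppn \
--context_path ~/my_context_raspberry-pi.rhn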