Porcupine - NVIDIA Jetson Quick Start

  • Wake Word Detection
  • Hotword Detection
  • Voice Commands
  • Always Listening
  • NVIDIA Jetson
  • Python
  • C

Requirements

  • Python 3

Compatibility

  • NVIDIA Jetson Nano

Microphone Setup

Connect the microphone and get the list of available input audio devices:

arecord -L

The output will be similar to the following:

null
    Discard all samples (playback) or generate zero samples (capture)
default
mic
sysdefault:CARD=Device
    USB PnP Sound Device, USB Audio
    Default Audio Device
hw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Direct hardware device without any conversions
plughw:CARD=Device,DEV=0
    USB PnP Sound Device, USB Audio
    Hardware device with all software conversions

In this case, we pick plughw:CARD=Device,DEV=0. Note that this device comes with software conversions, which are handy for resampling. In what follows we refer to this value as ${INPUT_AUDIO_DEVICE}.
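As a quick sanity check of the chosen device name, you can record from it directly by passing it to arecord with -D (the output file name here is arbitrary):

arecord -D ${INPUT_AUDIO_DEVICE} --format=S16_LE --duration=3 --rate=16000 --file-type=wav ~/device_test.wav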

Create ~/.asoundrc:

pcm.!default {
    type asym
    capture.pcm "mic"
}

pcm.mic {
    type plug
    slave {
        pcm "${INPUT_AUDIO_DEVICE}"
    }
}

If you have a speaker, add a section for it to ~/.asoundrc as well (see the sketch below). You may need to reboot the system for these settings to take effect.
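For example, assuming aplay -L lists a playback device that we refer to as ${OUTPUT_AUDIO_DEVICE} (a placeholder analogous to ${INPUT_AUDIO_DEVICE} above), ~/.asoundrc could be extended along these lines:

pcm.!default {
    type asym
    capture.pcm "mic"
    playback.pcm "speaker"
}

pcm.mic {
    type plug
    slave {
        pcm "${INPUT_AUDIO_DEVICE}"
    }
}

pcm.speaker {
    type plug
    slave {
        pcm "${OUTPUT_AUDIO_DEVICE}"
    }
}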

Check if the microphone works properly by recording audio into a file:

arecord --format=S16_LE --duration=5 --rate=16000 --file-type=wav ~/test.wav

If the command above executes without any errors, then the microphone is functioning as expected. We recommend inspecting the recorded file for recording side effects such as clipping.
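If a speaker is configured, one quick way to inspect the recording is to play it back:

aplay ~/test.wav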

Installation

The core of the wake word engine is shipped as a pre-compiled ANSI C library. Hence, it can be used within a C/C++ application directly or in a high-level language such as Python via its bindings.

Python

Install the PyAudio, PortAudio (portable audio I/O), and FFI development packages:

sudo apt-get install python3-pyaudio portaudio19-dev libffi-dev

Then install the pip package:

sudo pip3 install pvporcupinedemo

The demo is installed in /usr/local/bin, which should be in the PATH variable so that it can be run from anywhere within a terminal using the following command:

pvporcupine_mic --keywords porcupine

The engine starts processing the audio input from the microphone in real time and prints to the terminal when it detects an utterance of the wake word "porcupine". You can learn about the capabilities of pvporcupine_mic by running pvporcupine_mic --help from the terminal.
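To embed the engine in your own Python code instead of running the demo, the following is a minimal sketch using the pvporcupine binding (installed alongside the demo package) together with PyAudio for capture. Note that newer releases of pvporcupine may also require an AccessKey argument to create(); adjust to the version you have installed.

import struct

import pvporcupine
import pyaudio

# Create the engine for the built-in "porcupine" keyword.
porcupine = pvporcupine.create(keywords=["porcupine"])

pa = pyaudio.PyAudio()
stream = pa.open(
    rate=porcupine.sample_rate,       # 16 kHz
    channels=1,
    format=pyaudio.paInt16,
    input=True,
    frames_per_buffer=porcupine.frame_length,
)

try:
    while True:
        # Read one frame of 16-bit PCM samples and hand it to the engine.
        frame = stream.read(porcupine.frame_length, exception_on_overflow=False)
        pcm = struct.unpack_from("h" * porcupine.frame_length, frame)
        if porcupine.process(pcm) >= 0:
            print("Detected 'porcupine'")
finally:
    stream.close()
    pa.terminate()
    porcupine.delete()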

C

Install the ALSA development library:

sudo apt-get install libasound-dev

Clone the repository:

git clone https://github.com/Picovoice/porcupine.git

Go to the cloned folder and compile the C demo application:

gcc -O3 -o demo/c/porcupine_demo_mic -I include/ demo/c/porcupine_demo_mic.c -ldl -lasound -std=c99

Run the demo:

demo/c/porcupine_demo_mic lib/jetson/cortex-a57-aarch64/libpv_porcupine.so lib/common/porcupine_params.pv \
resources/keyword_files/jetson/porcupine_jetson.ppn 0.5 ${INPUT_AUDIO_DEVICE}

The engine starts processing the audio input from the microphone in real time and prints to the terminal when it detects an utterance of the wake word "porcupine".
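If you want to embed the engine in your own C code instead of running the bundled demo, the following is a minimal sketch built on the public API in include/pv_porcupine.h (the v1.x function signatures are assumed; newer versions may differ). The file name porcupine_stdin.c is hypothetical; compile it from the repository root with -I include/, link it against the Jetson library used above, and pipe raw microphone audio into it with arecord as shown in the comment.

/*
 * Minimal sketch of embedding Porcupine via the C API (v1.x assumed).
 * Reads raw 16-bit, 16 kHz, mono PCM from stdin, e.g.:
 *
 *   arecord -D ${INPUT_AUDIO_DEVICE} -f S16_LE -r 16000 -c 1 -t raw | ./porcupine_stdin
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include "pv_porcupine.h"

int main(void) {
    const char *keyword_path = "resources/keyword_files/jetson/porcupine_jetson.ppn";
    const float sensitivity = 0.5f;

    pv_porcupine_t *porcupine = NULL;
    if (pv_porcupine_init(
            "lib/common/porcupine_params.pv",
            1,
            &keyword_path,
            &sensitivity,
            &porcupine) != PV_STATUS_SUCCESS) {
        fprintf(stderr, "failed to initialize Porcupine\n");
        return 1;
    }

    const int32_t frame_length = pv_porcupine_frame_length();
    int16_t *pcm = malloc(sizeof(int16_t) * (size_t) frame_length);
    if (!pcm) {
        pv_porcupine_delete(porcupine);
        return 1;
    }

    /* Feed the engine one frame at a time and report detections. */
    while (fread(pcm, sizeof(int16_t), (size_t) frame_length, stdin) == (size_t) frame_length) {
        int32_t keyword_index = -1;
        pv_porcupine_process(porcupine, pcm, &keyword_index);
        if (keyword_index >= 0) {
            printf("detected 'porcupine'\n");
        }
    }

    free(pcm);
    pv_porcupine_delete(porcupine);
    return 0;
}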

Custom Wake Word

You can create custom Porcupine wake word models using Picovoice Console.

