Picovoice Platform — Python Quick Start



Requirements

  • Python 3
  • PIP

Supported Platforms

  • Linux (x86_64)
  • macOS (x86_64)
  • Windows (x86_64)
  • Raspberry Pi (all variants)
  • Nvidia Jetson (Nano)
  • BeagleBone

Cloning the Picovoice Repository

If using SSH, clone the repository with:

git clone --recurse-submodules [email protected]:Picovoice/picovoice.git

If using HTTPS, clone the repository with:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git


Installation

Install PyAudio and then the demo package:

sudo pip3 install picovoicedemo

Check usage information:

picovoice_demo_mic --help

Demo Applications

There are two command-line demo applications shipped with the demo package: a microphone demo and a file demo. The microphone demo starts an audio stream from the microphone, monitors it for occurrences of a given wake word, and then infers the user's intent from the follow-on voice command. The file demo performs the same operation on a pre-recorded audio file.
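
Conceptually, both demos run the same two-stage pipeline: frames are first scanned for the wake word, and only after a detection are subsequent frames routed to intent inference. The control flow can be sketched with stubbed-out engines — `detect_wake_word` and `infer_intent` below are toy placeholders operating on string tokens, not the real Porcupine/Rhino APIs:

```python
def detect_wake_word(frame) -> bool:
    # Placeholder for Porcupine: True when this frame completes the wake phrase.
    return frame == "porcupine"

def infer_intent(frame):
    # Placeholder for Rhino: returns (intent, slots) once the command is finalized.
    if frame == "lights-purple":
        return ("changeColor", {"location": "living room", "color": "purple"})
    return None

def process_stream(frames):
    """Route frames through wake-word detection, then intent inference."""
    results = []
    listening_for_command = False
    for frame in frames:
        if not listening_for_command:
            if detect_wake_word(frame):
                listening_for_command = True  # hand off to the intent engine
        else:
            inference = infer_intent(frame)
            if inference is not None:
                results.append(inference)
                listening_for_command = False  # return to wake-word monitoring
    return results

print(process_stream(["noise", "porcupine", "lights-purple", "noise"]))
# → [('changeColor', {'location': 'living room', 'color': 'purple'})]
```

The real engines consume fixed-length PCM frames rather than tokens, but the hand-off between the two stages works the same way.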

Microphone Demo

The demo reads audio from the microphone, processes it in real-time, and outputs to the terminal when a wake word is detected or the user's intent is inferred from a follow-on voice command. If you do not have custom Porcupine and Rhino models, you can use the pre-trained ones available in the repository. From the root of the cloned repository, run the demo:

picovoice_demo_mic \
--keyword_path resources/porcupine/resources/keyword_files/${PLATFORM}/porcupine_${PLATFORM}.ppn \
--context_path resources/rhino/resources/contexts/${PLATFORM}/smart_lighting_${PLATFORM}.rhn

Replace ${PLATFORM} with one of the following that matches your system:

  • linux
  • mac
  • windows
  • raspberry-pi
  • beaglebone
  • jetson
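
If you are scripting the demo invocation, the platform string can be derived from the interpreter's environment. A convenience sketch (not part of the demo package; distinguishing Raspberry Pi, BeagleBone, and Jetson boards reliably requires inspecting the board model, so ARM Linux defaults to raspberry-pi here):

```python
import platform

def resource_platform() -> str:
    """Map the current OS/architecture to the ${PLATFORM} resource-path string."""
    system = platform.system()
    machine = platform.machine()
    if system == "Linux":
        if machine == "x86_64":
            return "linux"
        if machine == "aarch64" or machine.startswith("arm"):
            # Assumption: ARM Linux means Raspberry Pi; adjust for beaglebone/jetson.
            return "raspberry-pi"
    elif system == "Darwin":
        return "mac"
    elif system == "Windows":
        return "windows"
    raise NotImplementedError(f"Unsupported platform: {system}/{machine}")

print(resource_platform())
```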

With a working microphone connected, say:

Porcupine, set the lights in the living room to purple.

You should see the following output in the terminal:

[Listening ...]
[wake word]
intent : 'changeColor'
slots : {
    location : 'living room'
    color : 'purple'
}
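
Rendering an inference result in that shape takes only a few lines; a minimal sketch (the function name and the flat intent/slots arguments are illustrative, not the demo's actual internals):

```python
def print_inference(intent, slots):
    # Render an inference in the same shape as the demo's terminal output.
    print(f"intent : '{intent}'")
    print("slots : {")
    for name, value in slots.items():
        print(f"    {name} : '{value}'")
    print("}")

print_inference("changeColor", {"location": "living room", "color": "purple"})
```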

File Demo

The demo opens a pre-recorded audio file, processes it, and outputs to the terminal when a wake word is detected or the user's intent is inferred from a follow-on voice command.

picovoice_demo_file \
--input_audio_path ${INPUT_AUDIO_PATH} \
--keyword_path ${KEYWORD_PATH} \
--context_path ${CONTEXT_PATH}

Replace ${INPUT_AUDIO_PATH} with the path to an audio file containing utterances of the desired wake phrase followed by spoken commands, ${KEYWORD_PATH} with the absolute path to a Porcupine keyword file (.ppn), and ${CONTEXT_PATH} with the absolute path to a Rhino context file (.rhn).
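
Under the hood, a file demo of this kind reads the audio, splits it into fixed-length frames of 16-bit PCM samples, and feeds each frame to the engine. A self-contained sketch of that framing step using the standard-library `wave` module (the 512-sample frame length is an assumption for illustration; the incomplete final frame is simply discarded):

```python
import struct
import wave

FRAME_LENGTH = 512  # samples per frame (illustrative; the engines define the real value)

def frames_from_wav(path):
    """Yield fixed-length tuples of 16-bit PCM samples from a mono WAV file."""
    with wave.open(path, "rb") as f:
        assert f.getsampwidth() == 2, "expected 16-bit audio"
        assert f.getnchannels() == 1, "expected mono audio"
        while True:
            raw = f.readframes(FRAME_LENGTH)
            if len(raw) < FRAME_LENGTH * 2:  # 2 bytes per sample
                break  # discard the incomplete trailing frame
            yield struct.unpack(f"<{FRAME_LENGTH}h", raw)

# Demo: write one second of 16 kHz silence, then count full frames.
with wave.open("silence.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(16000)
    f.writeframes(b"\x00\x00" * 16000)

n = sum(1 for _ in frames_from_wav("silence.wav"))
print(n)  # → 31 (16000 // 512 full frames)
```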

Create Custom Wake Words & Contexts

You can create custom Porcupine wake word and Rhino context models using Picovoice Console.
