Picovoice Platform — Python Quick Start

  • End-to-End Voice Platform
  • Offline Voice Recognition
  • Local Speech Recognition
  • Speech-to-Intent
  • Domain-Specific NLU
  • Wake Word Detection
  • Raspberry Pi
  • BeagleBone
  • Linux
  • macOS
  • Windows
  • Python

Requirements

  • Python 3
  • pip

Compatibility

  • Linux (x86_64)
  • macOS (x86_64)
  • Windows (x86_64)
  • Raspberry Pi (all variants)
  • BeagleBone

Cloning the Repository

If using SSH, clone the repository with:

git clone --recurse-submodules [email protected]:Picovoice/picovoice.git

If using HTTPS, clone it with:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Installation

Install PyAudio and then the demo package:

sudo pip3 install picovoicedemo

Check usage information:

picovoice_demo_mic --help

Demo Applications

Two command-line demo applications ship with the demo package: a microphone demo and a file demo. The microphone demo opens an audio stream from the microphone, monitors it for occurrences of a given wake word, and then infers the user's intent from the follow-on voice command. The file demo performs the same operation on a pre-recorded audio file.
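
Both demos are built on the picovoice Python SDK, which runs a Porcupine wake word engine and a Rhino Speech-to-Intent engine behind a single handle. Below is a minimal sketch of that pipeline, not the demos' actual source: it assumes the picovoice package is installed (pip3 install picovoice) and uses placeholder model paths; recent SDK releases also require an AccessKey obtained from Picovoice Console.

from picovoice import Picovoice

def wake_word_callback():
    # Called when the wake word (e.g. "Porcupine") is spotted in the audio.
    print('[wake word]')

def inference_callback(inference):
    # Called once Rhino finishes inferring intent from the follow-on command.
    if inference.is_understood:
        print(inference.intent, inference.slots)

# Placeholder paths; recent SDK versions also need an `access_key=...` argument.
pv = Picovoice(
    keyword_path='path/to/keyword.ppn',
    wake_word_callback=wake_word_callback,
    context_path='path/to/context.rhn',
    inference_callback=inference_callback)

# Audio is then passed to the engine as consecutive frames of pv.frame_length
# 16-bit samples recorded at pv.sample_rate (see the demo sketches below).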

Microphone Demo

The demo reads audio from the microphone, processes it in real time, and prints to the terminal when a wake word is detected or the user's intent is inferred from a follow-on voice command. If you don't have custom Porcupine and Rhino models, you can use the pre-trained ones available in the repository. From the root of the cloned repository, run the demo:

picovoice_demo_mic \
--keyword_path resources/porcupine/resources/keyword_files/${PLATFORM}/porcupine_${PLATFORM}.ppn \
--context_path resources/rhino/resources/contexts/${PLATFORM}/smart_lighting_${PLATFORM}.rhn

Replace ${PLATFORM} with one of the following:

  • linux
  • mac
  • windows
  • raspberry-pi
  • beaglebone
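
If you prefer to pick this suffix programmatically rather than hard-coding it, a small helper along the following lines can do it. This is a hypothetical convenience (not part of the demo package), built only on the standard library:

import platform

def picovoice_platform():
    # Guess the ${PLATFORM} suffix used by the pre-trained model file names.
    system = platform.system()
    if system == 'Darwin':
        return 'mac'
    if system == 'Windows':
        return 'windows'
    if system == 'Linux':
        machine = platform.machine()
        if machine == 'x86_64':
            return 'linux'
        # ARM boards: default to Raspberry Pi; check e.g. /proc/device-tree/model
        # if you need to distinguish a BeagleBone.
        if machine.startswith(('arm', 'aarch64')):
            return 'raspberry-pi'
    raise NotImplementedError('Unsupported platform: %s/%s' % (system, platform.machine()))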

With a working microphone connected, say:

Porcupine, set the lights in the living room to purple.

You should see output like the following in the terminal:

[Listening ...]
[wake word]
{
  intent : 'changeColor'
  slots : {
    location : 'living room'
    color : 'purple'
  }
}
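
Under the hood, the microphone demo essentially feeds audio frames from PyAudio into the Picovoice engine. A minimal sketch of that loop, under the same assumptions as the earlier example (picovoice and pyaudio packages installed, placeholder model paths, and an AccessKey on recent SDK versions):

import struct

import pyaudio
from picovoice import Picovoice

def on_inference(inference):
    # Print the inferred intent and slots once the follow-on command ends.
    if inference.is_understood:
        print(inference.intent, inference.slots)

pv = Picovoice(
    keyword_path='path/to/keyword.ppn',            # placeholder paths
    wake_word_callback=lambda: print('[wake word]'),
    context_path='path/to/context.rhn',
    inference_callback=on_inference)

pa = pyaudio.PyAudio()
stream = pa.open(
    rate=pv.sample_rate,
    channels=1,
    format=pyaudio.paInt16,
    input=True,
    frames_per_buffer=pv.frame_length)

print('[Listening ...]')
try:
    while True:
        # Read one frame of 16-bit linear PCM and hand it to the engine.
        pcm = stream.read(pv.frame_length)
        pv.process(struct.unpack_from('h' * pv.frame_length, pcm))
except KeyboardInterrupt:
    pass
finally:
    stream.close()
    pa.terminate()
    pv.delete()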

File Demo

The demo opens a pre-recorded audio file, processes it, and prints to the terminal when a wake word is detected or the user's intent is inferred from a follow-on voice command.

picovoice_demo_file \
--input_audio_path ${INPUT_AUDIO_PATH} \
--keyword_path ${KEYWORD_PATH} \
--context_path ${CONTEXT_PATH}

Replace ${INPUT_AUDIO_PATH} with the path to an audio file containing utterances of the desired wake phrase followed by spoken commands, ${KEYWORD_PATH} with the absolute path to a Porcupine keyword file (.ppn), and ${CONTEXT_PATH} with the absolute path to a Rhino context file (.rhn).
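
The same can be done programmatically: frames are read from the file instead of a microphone and fed to the engine in order. A minimal sketch using the standard-library wave module, assuming a single-channel, 16-bit WAV file recorded at the engine's sample rate, placeholder model paths, and (on recent SDK versions) an AccessKey:

import struct
import wave

from picovoice import Picovoice

def on_inference(inference):
    # Print the inferred intent and slots once the follow-on command ends.
    if inference.is_understood:
        print(inference.intent, inference.slots)

pv = Picovoice(
    keyword_path='path/to/keyword.ppn',            # placeholder paths
    wake_word_callback=lambda: print('[wake word]'),
    context_path='path/to/context.rhn',
    inference_callback=on_inference)

with wave.open('path/to/input.wav', 'rb') as f:
    assert f.getnchannels() == 1 and f.getsampwidth() == 2 and f.getframerate() == pv.sample_rate
    samples = struct.unpack('%dh' % f.getnframes(), f.readframes(f.getnframes()))

# Feed the recording to the engine one frame at a time, as if it were live audio.
for i in range(0, len(samples) - pv.frame_length + 1, pv.frame_length):
    pv.process(samples[i:i + pv.frame_length])

pv.delete()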

Custom Wake Word & Context

You can create custom Porcupine wake word and Rhino context models using Picovoice Console.