Rhino - Python API


This document outlines how to integrate the Rhino Speech-to-Intent engine into an application using its Python API.

Requirements
  • Python 3
  • PIP

Compatibility
  • Linux (x86_64)
  • macOS (x86_64)
  • Windows (x86_64)
  • Raspberry Pi (all variants)
  • BeagleBone
  • NVIDIA Jetson (Nano)

Installation
pip3 install pvrhino

Usage
Create an instance of the engine:

import pvrhino
handle = pvrhino.create(context_path='/absolute/path/to/context')

Where context_path is the absolute path to a Speech-to-Intent context, either created using Picovoice Console or taken from the default contexts available in Rhino's GitHub repository.

The sensitivity of the engine can be tuned using the sensitivity parameter. It is a floating point number within [0, 1]. A higher sensitivity value results in fewer misses at the cost of (potentially) increasing the erroneous inference rate.

import pvrhino
handle = pvrhino.create(context_path='/absolute/path/to/context', sensitivity=0.25)

Once the engine is initialized, the required sample rate is given by handle.sample_rate and the expected frame length (the number of audio samples in an input array) by handle.frame_length. The engine accepts 16-bit linearly-encoded PCM and operates on single-channel audio.
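As an illustration of these values (512 and 16000 are only typical assumptions; query handle.frame_length and handle.sample_rate at runtime), the duration of audio covered by one frame follows directly:

```python
FRAME_LENGTH = 512   # assumed; use handle.frame_length in a real application
SAMPLE_RATE = 16000  # assumed; use handle.sample_rate in a real application

# duration of one frame in milliseconds
frame_duration_ms = 1000 * FRAME_LENGTH / SAMPLE_RATE
print(frame_duration_ms)  # 32.0
```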

def get_next_audio_frame():
    # add code to read the next frame of audio (a list of
    # handle.frame_length 16-bit samples)
    pass

while True:
    is_finalized = handle.process(get_next_audio_frame())
    if is_finalized:
        inference = handle.get_inference()
        if not inference.is_understood:
            # add code to handle unsupported commands
            pass
        else:
            intent = inference.intent
            slots = inference.slots
            # add code to take action based on inferred intent and slot values
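The loop above leaves get_next_audio_frame unimplemented. As one possible sketch using only the standard library (the frame length default is an assumption; read handle.frame_length and handle.sample_rate in a real application), frames can be pulled from a single-channel, 16-bit PCM WAV file:

```python
import struct
import wave

def frames_from_wav(path, frame_length=512):
    # Yield successive frames of `frame_length` 16-bit samples from a
    # single-channel, 16-bit PCM WAV file; the trailing partial frame
    # (if any) is dropped.
    with wave.open(path, 'rb') as wav_file:
        assert wav_file.getnchannels() == 1
        assert wav_file.getsampwidth() == 2  # 16-bit samples
        while True:
            data = wav_file.readframes(frame_length)
            if len(data) < frame_length * 2:
                break
            yield struct.unpack('<%dh' % frame_length, data)
```

Each yielded tuple can be passed directly to handle.process. Note that the file's sample rate must already match handle.sample_rate; this sketch does not resample.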

When done, resources have to be released explicitly:

handle.delete()

Custom Context

You can create custom Rhino context models using Picovoice Console.

Non-English Contexts

To run inference on non-English contexts, you need to use the corresponding model file. The model files for all supported languages are available in Rhino's GitHub repository.
