Rhino - Rust API

  • Speech-to-Intent Engine
  • Domain Specific NLU
  • Offline NLU
  • Local Voice Recognition
  • Raspberry Pi
  • BeagleBone
  • NVIDIA Jetson
  • Linux
  • macOS
  • Windows
  • Rust

This document outlines how to integrate the Rhino Speech-to-Intent engine within an application using its Rust API.


Requirements

  • Rust 1.54+
  • Cargo


Compatibility

  • Linux (x86_64)
  • macOS (x86_64)
  • Windows (x86_64)
  • Raspberry Pi (all variants)
  • NVIDIA Jetson (Nano)
  • BeagleBone


Installation

You can install the latest version of Rhino into your Rust crate by adding pv_rhino to your Cargo.toml:

[dependencies]
pv_rhino = "*"


Usage

To create an instance of the engine, first create a RhinoBuilder instance with the configuration parameters for the Speech-to-Intent engine, then call .init():

use rhino::RhinoBuilder;
let rhino: Rhino = RhinoBuilder::new("/path/to/context/file.rhn").init().expect("Unable to create Rhino");

The context file is a Speech-to-Intent context created either using Picovoice Console or one of the default contexts available on Rhino's GitHub repository.

The sensitivity of the engine can be tuned with the sensitivity parameter, a floating-point number within [0, 1]. A higher sensitivity results in fewer misses at the cost of potentially increasing the rate of erroneous inferences. You can also override the default Rhino model file (.pv), which needs to be done when using a non-English context:

let rhino: Rhino = RhinoBuilder::new("/path/to/context/file.rhn")
    .sensitivity(0.42f32)
    .model_path("/path/to/rhino/params.pv")
    .init()
    .expect("Unable to create Rhino");

Once initialized, you can start passing frames of audio to the engine for processing. The sample rate required by the engine is given by sample_rate() and the number of samples per frame by frame_length().
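Capture sources rarely deliver audio in chunks of exactly frame_length() samples, so a common pattern is to buffer incoming PCM samples and drain complete frames. A minimal sketch using plain types (the helper name is hypothetical; in real code, pass the value returned by frame_length() rather than a constant):

```rust
/// Drain as many complete frames of `frame_len` samples as the buffer holds,
/// leaving any partial frame in place for the next capture callback.
fn drain_frames(buffer: &mut Vec<i16>, frame_len: usize) -> Vec<Vec<i16>> {
    let mut frames = Vec::new();
    while buffer.len() >= frame_len {
        // drain(..frame_len) removes the first `frame_len` samples from the buffer
        frames.push(buffer.drain(..frame_len).collect());
    }
    frames
}
```

Each returned frame can then be passed to process() in turn.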

To feed audio into Rhino, call process in your capture loop:

fn next_audio_frame() -> Vec<i16> {
    // get audio frame
}

loop {
    if let Ok(is_finalized) = rhino.process(&next_audio_frame()) {
        if is_finalized {
            if let Ok(inference) = rhino.get_inference() {
                if inference.is_understood {
                    let intent = inference.intent.unwrap();
                    let slots = inference.slots;
                    // add code to take action based on inferred intent and slot values
                } else {
                    // add code to handle unsupported commands
                }
            }
        }
    }
}
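As an illustration of what "taking action" might look like, the sketch below dispatches on the intent name and reads slot values for a hypothetical smart-lighting context. The intent and slot names ("changeColor", "turnOff", "color") are made up for this example; the &str / HashMap<String, String> types mirror the intent and slots fields of the inference result:

```rust
use std::collections::HashMap;

/// Hypothetical dispatcher for a smart-lighting context: maps an inferred
/// intent name and its slot values to an action description.
fn handle_intent(intent: &str, slots: &HashMap<String, String>) -> String {
    match intent {
        "changeColor" => {
            // fall back to a default when the optional slot was not spoken
            let color = slots.get("color").map(String::as_str).unwrap_or("white");
            format!("setting lights to {}", color)
        }
        "turnOff" => "turning lights off".to_string(),
        _ => "unsupported command".to_string(),
    }
}
```

The set of intents and slots available to match on is defined entirely by the context file.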

Custom Context

You can create custom Rhino context models using Picovoice Console.

Non-English Contexts

In order to run inference on non-English contexts, you need to use the corresponding model file (.pv extension). The model files for all supported languages are available on Rhino's GitHub repository.
