Rhino Speech-to-Intent
React Quick Start

Platforms

  • Chrome & Chromium-based browsers
  • Edge
  • Firefox
  • Safari

Requirements

  • Picovoice Account and AccessKey
  • Node.js 16+
  • React 17+
  • npm

Picovoice Account & AccessKey

Sign up for or log in to Picovoice Console to get your AccessKey. Keep your AccessKey secret.

Quick Start

Setup

  1. Install Node.js.

  2. Install the npm packages:

    • @picovoice/rhino-react
    • @picovoice/web-voice-processor

npm install @picovoice/rhino-react @picovoice/web-voice-processor

Usage

To initialize Rhino Speech-to-Intent, you'll need a context file (.rhn) as well as a model file (.pv). Place these files in the project's public directory or generate base64 representations of the files using the built-in script:

npx pvbase64 -i ${RHINO_PARAMS_PATH} -o ${OUTPUT_FILE_PATH}
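
For example, converting both a context file and a model file might look like the lines below (the input and output paths are illustrative; substitute your own file locations):

npx pvbase64 -i ./my_context.rhn -o ./src/rhino_context_base64.js
npx pvbase64 -i ./rhino_params.pv -o ./src/rhino_model_base64.js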

Create a rhinoContext and a rhinoModel object with either of the methods above:

const RHINO_CONTEXT_BASE64 = /* Base64 representation of the `.rhn` context file */;

const rhinoContext = {
  publicPath: "${CONTEXT_FILE_PATH}",
  // or
  base64: RHINO_CONTEXT_BASE64,
};

const RHINO_MODEL_BASE64 = /* Base64 representation of the `.pv` model file */;

const rhinoModel = {
  publicPath: "${MODEL_FILE_PATH}",
  // or
  base64: RHINO_MODEL_BASE64,
};

Import and call the useRhino Hook, and initialize Rhino Speech-to-Intent with your AccessKey, rhinoContext, and rhinoModel:

import React, { useEffect } from 'react';
import { useRhino } from '@picovoice/rhino-react';

function VoiceWidget(props) {
  const {
    inference,
    contextInfo,
    isLoaded,
    isListening,
    error,
    init,
    process,
    release,
  } = useRhino();

  const rhinoContext = { publicPath: "${CONTEXT_FILE_PATH}" };
  const rhinoModel = { publicPath: "${MODEL_FILE_PATH}" };

  useEffect(() => {
    init(
      "${ACCESS_KEY}",
      rhinoContext,
      rhinoModel
    );
  }, []);

  useEffect(() => {
    if (inference !== null) {
      // ... use inference detection result
    }
  }, [inference]);

  // ... render component
}

To start detecting an inference, call the process function:

await process();

The process function initializes WebVoiceProcessor. Rhino Speech-to-Intent then listens to and processes frames of microphone audio until it reaches a conclusion, and returns the result via the inference variable. Once a conclusion is reached, Rhino enters a paused state, and process must be called again to detect another inference.
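
As an illustration, the // ... render component placeholder above could render a button that triggers a single inference and display the latest result. The markup below is only a sketch, not part of the SDK:

// Inside VoiceWidget, below the hooks shown above (illustrative markup)
return (
  <div>
    {error && <p>Error: {error.toString()}</p>}
    {/* process starts listening for one inference; Rhino pauses again
        once it reaches a conclusion */}
    <button onClick={() => process()} disabled={!isLoaded || isListening}>
      Start Listening
    </button>
    {/* inference reports whether the utterance was understood and, if so,
        the matched intent and slot values */}
    {inference && <pre>{JSON.stringify(inference, null, 2)}</pre>}
  </div>
);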

Allocated resources are automatically freed on unmount, but they can also be released explicitly:

await release();
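
For example, a minimal sketch of releasing explicitly when the component tears down (the automatic cleanup on unmount already covers this case; the effect below is illustrative):

// Optional: explicit cleanup on unmount, inside VoiceWidget
useEffect(() => {
  return () => {
    release();
  };
}, []);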

Custom Contexts

Create custom contexts in the Picovoice Console using the Rhino Speech-to-Intent Grammar. Train and download a Rhino context file (.rhn) for the Web (WASM) platform. This context file can be used directly with publicPath, but if base64 is preferable, convert the .rhn file to a base64 JavaScript variable using the built-in pvbase64 script:

npx pvbase64 -i ${CONTEXT_FILE}.rhn -o ${CONTEXT_BASE64}.js -n ${CONTEXT_BASE64_VAR_NAME}

Like the model file (.pv), context files (.rhn) are saved in IndexedDB for use by WebAssembly. Either base64 or publicPath must be set for the context to instantiate Rhino. If both are set, Rhino Speech-to-Intent uses the base64 representation.

const contextModel = {
  publicPath: "${CONTEXT_FILE_PATH}",
  // or
  base64: "${CONTEXT_BASE64_STRING}",
};

Non-English Languages

To use Rhino Speech-to-Intent with a language other than English, use the corresponding model file (.pv) for that language. Model files for all supported languages are available in the Rhino GitHub repository.
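
For example, a German setup might look like the sketch below (file names and paths are illustrative; the .rhn file comes from Picovoice Console and the .pv file from the Rhino GitHub repository):

// Illustrative: German context and model files served from the public directory
const rhinoContext = { publicPath: "/contexts/my_context_de.rhn" };
const rhinoModel = { publicPath: "/models/rhino_params_de.pv" };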

Demo

A React demo project for the Rhino Speech-to-Intent React SDK is available in the Rhino GitHub repository.

Setup

Clone the Rhino Speech-to-Intent repository from GitHub:

git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Usage

  1. Install dependencies:

cd rhino/demo/react
npm install

  2. Run the start script with a language code to launch a local web server hosting the demo in the language of your choice (e.g. de -> German, ko -> Korean). To see a list of available languages, run start without a language code.

npm run start ${LANGUAGE}

  3. Open http://localhost:3000 to view the demo in the browser.

  4. Enter your AccessKey and press Init Rhino to start the demo.

Resources

Packages

  • @picovoice/rhino-react on the npm registry
  • @picovoice/web-voice-processor on the npm registry

API

  • @picovoice/rhino-react API Docs

GitHub

  • Rhino Speech-to-Intent React SDK on GitHub
  • Rhino Speech-to-Intent React Demo on GitHub

Benchmark

  • Speech-to-Intent Benchmark

Further Reading

  • Cross-Browser Voice Commands with React
