Leopard Speech-to-Text
React Quick Start

Platforms

  • Chrome & Chromium-based browsers
  • Edge
  • Firefox
  • Safari

Requirements

  • Picovoice Account and AccessKey
  • Node.js 14+
  • React 17.0+
  • npm

Picovoice Account & AccessKey

Sign up for or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Quick Start

Setup

  1. Install Node.js.

  2. Install the npm packages:

    • @picovoice/leopard-react
    • @picovoice/web-voice-processor
npm install @picovoice/leopard-react @picovoice/web-voice-processor

Usage

To initialize Leopard, you'll need a Leopard model file (.pv). Place the model file in the project's public directory or generate a base64 representation of the file using the built-in script:

npx pvbase64 -i ${LEOPARD_PARAMS_PATH} -o ${OUTPUT_FILE_PATH}

Create a leopardModel object with either of the methods above:

const LEOPARD_MODEL_BASE64 = /* Base64 representation of the `.pv` model file */;

const leopardModel = {
  publicPath: "${MODEL_FILE_PATH}",
  // or
  base64: LEOPARD_MODEL_BASE64,
};

Import and call the useLeopard Hook, and initialize Leopard with your AccessKey and leopardModel:

import React, { useEffect } from "react";
import { useLeopard } from "@picovoice/leopard-react";

function VoiceWidget(props) {
  const {
    result,
    isLoaded,
    error,
    init,
    processFile,
    startRecording,
    stopRecording,
    isRecording,
    recordingElapsedSec,
    release,
  } = useLeopard();

  const leopardModel = { publicPath: "${MODEL_FILE_PATH}" };

  useEffect(() => {
    init(
      "${ACCESS_KEY}",
      leopardModel
    );
  }, []);

  useEffect(() => {
    if (result !== null) {
      // ... use transcript result
    }
  }, [result]);

  // ... render component
}
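
As a hedged illustration (not part of the SDK), the component's render can surface the hook's state values. The sketch below assumes result exposes a transcript field, as in the Leopard Web SDK, and that error can be rendered with toString():

  // Illustrative render sketch inside VoiceWidget: show load state, any error,
  // and the transcript once `result` is populated.
  return (
    <div>
      <p>{isLoaded ? "Leopard loaded" : "Loading Leopard..."}</p>
      {error && <p>{error.toString()}</p>}
      {result && <p>{result.transcript}</p>}
    </div>
  );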

To process audio, you can either upload it as a File object or record it directly. Once the audio has been processed, the transcript will be available in the result state variable.

File Object

Transcribe File objects directly using the processFile function:

<input
  type="file"
  accept="audio/*"
  onChange={async (e) => {
    if (!!e.target.files?.length) {
      await processFile(e.target.files[0]);
    }
  }}
/>

Record Audio

The Leopard React binding uses WebVoiceProcessor to record audio from the microphone. To start recording audio, call startRecording:

await startRecording();

Call stopRecording to stop recording audio and begin processing:

await stopRecording();

Once processing is complete, the transcript will be available via the result state variable.
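
Putting these together, a minimal sketch of a recording control (illustrative only, using the isRecording flag returned by the hook) could look like:

  // Illustrative sketch: a single button that toggles recording using the
  // `isRecording` flag returned by useLeopard.
  const toggleRecording = async () => {
    if (isRecording) {
      await stopRecording(); // stops the microphone and begins processing
    } else {
      await startRecording();
    }
  };

  // In the component's JSX:
  // <button onClick={toggleRecording}>{isRecording ? "Stop" : "Record"}</button>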

Allocated resources are automatically freed on unmount, but they can also be released explicitly:

await release();

Custom Models

Create custom models using the Picovoice Console. Train and download a Leopard speech-to-text model (.pv) for the Web (WASM) platform. The model file can be used directly with publicPath, but if base64 is preferable, convert the .pv file to a base64 JavaScript variable using the built-in pvbase64 script:

npx pvbase64 -i ${MODEL_FILE}.pv -o ${MODEL_BASE64}.js -n ${MODEL_BASE64_VAR_NAME}

Model files (.pv) are saved in IndexedDB so they can be used by WebAssembly. Either base64 or publicPath must be set to instantiate Leopard. If both are set, Leopard uses the base64 model.

const leopardModel = {
  publicPath: "${MODEL_FILE_PATH}",
  // or
  base64: "${MODEL_BASE64_STRING}",
};
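
If the model was converted with the pvbase64 script, the generated .js file can be imported and its variable passed via base64. The following is an illustrative sketch; the module path and variable name are placeholders, and the import form should match how your generated file exports the variable:

  // Illustrative sketch: import the variable produced by pvbase64 and pass it
  // via `base64`. The path and export form depend on your pvbase64 options.
  import leopardModelBase64 from "./leopard_model_base64";

  const leopardModel = {
    base64: leopardModelBase64,
  };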

Switching Languages

To use Leopard with other languages, use the corresponding model file (.pv) for the desired language. Model files for all supported languages are available on the Leopard GitHub repository.
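
For example, a hypothetical German setup (the file name below is illustrative; use the model file you downloaded for the desired language and place it in the project's public directory):

  // Illustrative sketch: point publicPath at the language-specific model file,
  // e.g. a German model placed in the public directory.
  const leopardModelDe = {
    publicPath: "/leopard_params_de.pv", // hypothetical file name
  };
  init("${ACCESS_KEY}", leopardModelDe);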

Demo

A React demo project for the Leopard React SDK is available in the Leopard GitHub repository.

Setup

Clone the Leopard repository from GitHub:

git clone --recurse-submodules https://github.com/Picovoice/leopard.git

Usage

  1. Install dependencies:
cd leopard/demo/react
npm install
  2. Run the demo with the start script, passing a language code to start a local web server hosting the demo in the language of your choice (e.g. de -> German, ko -> Korean). To see a list of available languages, run start without a language code.
npm run start ${LANGUAGE}
  3. Open http://localhost:3000 to view it in the browser.

  4. Enter your AccessKey and press Init Leopard. Once Leopard has loaded, upload an audio file or record audio with a microphone to begin transcribing speech to text.

Resources

Package

  • @picovoice/leopard-react on the npm registry

API

  • @picovoice/leopard-react API Docs

GitHub

  • Leopard React SDK on GitHub
  • Leopard React Demo on GitHub

Benchmark

  • Speech-to-Text Benchmark
