Picovoice Platform — React Native API

  • Wake Word Detection
  • Local Voice Commands
  • Offline Keyword Spotting
  • Always Listening
  • Voice Activation
  • React Native
  • JavaScript
  • Mobile
  • Android
  • iOS

Requirements

  • Node
  • yarn or npm
  • CocoaPods
  • Android SDK 16+
  • JDK 8+
  • Xcode 9+

Refer to React Native's environment setup page for specifics on how to prepare your machine for React Native development.

Compatibility

  • React Native 0.62.2+
  • Android 4.1+ (API 16+)
  • iOS 9.0+

Installation

To install Picovoice into your React Native project, add the following native modules:

yarn add @picovoice/react-native-voice-processor
yarn add @picovoice/porcupine-react-native
yarn add @picovoice/rhino-react-native
yarn add @picovoice/picovoice-react-native

or

npm i @picovoice/react-native-voice-processor
npm i @picovoice/porcupine-react-native
npm i @picovoice/rhino-react-native
npm i @picovoice/picovoice-react-native

Link the iOS package:

cd ios && pod install && cd ..

NOTE: Due to a limitation in React Native CLI autolinking, these native modules cannot be included as transitive dependencies. If you are creating a module that depends on these packages, you will have to list them as peer dependencies and require developers to install them alongside your module.
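For example, a dependent module's package.json might declare them like this (the version ranges are illustrative):

{
  "peerDependencies": {
    "@picovoice/react-native-voice-processor": "*",
    "@picovoice/porcupine-react-native": "*",
    "@picovoice/rhino-react-native": "*",
    "@picovoice/picovoice-react-native": "*"
  }
}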

Permissions

To enable recording with the hardware's microphone, you must first ensure that you have enabled the proper permission on both iOS and Android.

On iOS, open your Info.plist and add the following line:

<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>

On Android, open your AndroidManifest.xml and add the following line:

<uses-permission android:name="android.permission.RECORD_AUDIO" />

Finally, in your app JS code, be sure to check that the user has granted recording permission before proceeding with audio capture:

// at the top of your file:
// import { Platform, PermissionsAndroid } from 'react-native';

let recordAudioRequest;
if (Platform.OS === 'android') {
  // For Android, we need to explicitly ask for the permission
  recordAudioRequest = this._requestRecordAudioPermission();
} else {
  // iOS automatically asks for permission when the microphone is first used
  recordAudioRequest = new Promise(function (resolve, _) {
    resolve(true);
  });
}

recordAudioRequest.then((hasPermission) => {
  if (hasPermission) {
    // Code that uses Picovoice
  }
});

async _requestRecordAudioPermission() {
  const granted = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
    {
      title: 'Microphone Permission',
      message: '[Permission explanation]',
      buttonNeutral: 'Ask Me Later',
      buttonNegative: 'Cancel',
      buttonPositive: 'OK',
    }
  );
  return granted === PermissionsAndroid.RESULTS.GRANTED;
}

Usage

The module provides two levels of API to choose from, depending on your needs.

High-Level API

PicovoiceManager provides a high-level API that takes care of audio recording. This class is the quickest way to get started.

The static constructor PicovoiceManager.create creates an instance of PicovoiceManager using the Porcupine keyword file and Rhino context file that you pass to it.

this._picovoiceManager = PicovoiceManager.create(
  '/path/to/keyword/file.ppn',
  wakeWordCallback,
  '/path/to/context/file.rhn',
  inferenceCallback);

The wakeWordCallback and inferenceCallback parameters are functions that are invoked when a wake word is detected and when an inference is made, respectively.

wakeWordCallback() {
  // wake word detected!
}

inferenceCallback(inference) {
  // `inference` is a JSON object with the following fields:
  // (1) isUnderstood
  // (2) intent
  // (3) slots
}

You can override the default model files and sensitivities:

let porcupineSensitivity = 0.7;
let rhinoSensitivity = 0.6;
this._picovoiceManager = PicovoiceManager.create(
  '/path/to/keyword/file.ppn',
  wakeWordCallback,
  '/path/to/context/file.rhn',
  inferenceCallback,
  porcupineSensitivity,
  rhinoSensitivity,
  '/path/to/porcupine/model.pv',
  '/path/to/rhino/model.pv');

Once you have instantiated a PicovoiceManager, you can start audio capture and processing by calling:

try {
  let didStart = await this._picovoiceManager.start();
} catch (e) {
  // handle error
}

And then stop it by calling:

let didStop = await this._picovoiceManager.stop();

There is no need to deal with audio capture to enable intent inference with PicovoiceManager because it uses our @picovoice/react-native-voice-processor module to capture frames of audio and automatically pass them to Picovoice.
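
Putting it together, a minimal sketch of driving PicovoiceManager from a class component's lifecycle might look like the following (the paths and callback names are placeholders):

async componentDidMount() {
  this._picovoiceManager = PicovoiceManager.create(
    '/path/to/keyword/file.ppn',
    this._wakeWordCallback,
    '/path/to/context/file.rhn',
    this._inferenceCallback);
  try {
    await this._picovoiceManager.start();
  } catch (e) {
    // handle error (e.g. a model file could not be found)
  }
}

componentWillUnmount() {
  this._picovoiceManager.stop();
}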

Low-Level API

Picovoice provides low-level access to the Picovoice platform for those who want to incorporate it into an already existing audio processing pipeline.

Picovoice is created by passing a Porcupine keyword file and a Rhino context file to the create static constructor. Sensitivities and model files are optional.

async createPicovoice() {
  let porcupineSensitivity = 0.7;
  let rhinoSensitivity = 0.6;
  try {
    this._picovoice = await Picovoice.create(
      '/path/to/keyword/file.ppn',
      wakeWordCallback,
      '/path/to/context/file.rhn',
      inferenceCallback,
      porcupineSensitivity,
      rhinoSensitivity,
      '/path/to/porcupine/model.pv',
      '/path/to/rhino/model.pv');
  } catch (err) {
    // handle error
  }
}

wakeWordCallback() {
  // wake word detected!
}

inferenceCallback(inference) {
  // `inference` is a JSON object with the following fields:
  // (1) isUnderstood
  // (2) intent
  // (3) slots
}

To use Picovoice, just pass frames of audio to the process function. The callbacks will trigger automatically when the wake word is detected and then again when an inference has been made for the follow-on command.

let buffer = getAudioFrame();

try {
  await this._picovoice.process(buffer);
} catch (e) {
  // handle error
}

For process to work correctly, the audio data must be in the format required by Picovoice. Call .sampleRate to get the required sample rate and .frameLength to get the required number of samples per frame. Audio must be single-channel and 16-bit linearly-encoded.
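
As a sketch, slicing a longer recording into frames of the required length might look like this (pcmSamples is a hypothetical array of single-channel, 16-bit samples captured at .sampleRate):

// inside an async function; `pcmSamples` is a hypothetical recording
let frameLength = this._picovoice.frameLength;
for (let i = 0; i + frameLength <= pcmSamples.length; i += frameLength) {
  let frame = pcmSamples.slice(i, i + frameLength);
  try {
    await this._picovoice.process(frame);
  } catch (e) {
    // handle error
  }
}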

Finally, once you no longer need the Picovoice instance, be sure to explicitly release the resources allocated to it:

this._picovoice.delete();

Custom Wake Word & Context

You can create custom Porcupine wake word and Rhino context models using the Picovoice Console.

Custom Model Integration

To add custom models to your React Native application, you'll need to add them to your platform projects. Android models must be added to ./android/app/src/main/res/raw/, while iOS models can be added anywhere under ./ios, but must be included as a bundled resource in your iOS project. Then, in your app code, retrieve the files using the react-native-fs package, like so:

const RNFS = require('react-native-fs');

let wakeWordName = 'keyword';
let wakeWordFilename = wakeWordName;
let wakeWordPath = '';
let contextName = 'context';
let contextFilename = contextName;
let contextPath = '';

if (Platform.OS === 'android') {
  // for Android, extract resources from the APK
  wakeWordFilename += '_android.ppn';
  wakeWordPath = `${RNFS.DocumentDirectoryPath}/${wakeWordFilename}`;
  await RNFS.copyFileRes(wakeWordFilename, wakeWordPath);

  contextFilename += '_android.rhn';
  contextPath = `${RNFS.DocumentDirectoryPath}/${contextFilename}`;
  await RNFS.copyFileRes(contextFilename, contextPath);
} else if (Platform.OS === 'ios') {
  wakeWordFilename += '_ios.ppn';
  wakeWordPath = `${RNFS.MainBundlePath}/${wakeWordFilename}`;

  contextFilename += '_ios.rhn';
  contextPath = `${RNFS.MainBundlePath}/${contextFilename}`;
}
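
The resolved paths can then be passed to the constructors described above, e.g.:

this._picovoiceManager = PicovoiceManager.create(
  wakeWordPath,
  wakeWordCallback,
  contextPath,
  inferenceCallback);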

Non-English Models

In order to detect wake words and run inference in languages other than English, you need to use the corresponding model files. The model files for all supported languages are available in the Porcupine and Rhino GitHub repositories.
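
For example, using hypothetical German models would follow the model-override pattern shown earlier (the file names are illustrative):

// illustrative: German keyword/context files with matching German model files
this._picovoiceManager = PicovoiceManager.create(
  '/path/to/german/keyword.ppn',
  wakeWordCallback,
  '/path/to/german/context.rhn',
  inferenceCallback,
  0.5,
  0.5,
  '/path/to/porcupine_params_de.pv',
  '/path/to/rhino_params_de.pv');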

