As voice technology continues to evolve, voice commands are becoming foundational to human-computer interaction. Unlike traditional spoken language understanding systems that simply transcribe words, Speech-to-Intent infers meaning directly from utterances. By converting spoken language directly into structured intents and actions, it allows apps to interpret what the user wants and act on it. This ability brings us closer to true conversational interfaces.
The term voice commands is often used interchangeably with other terms such as Speech-to-Intent, natural language understanding (NLU), and spoken language understanding (SLU).
Another major leap for voice technology is the ability to run on-device. Offline processing keeps interactions fast, private, and reliable without needing an internet connection. By keeping audio data on-device, apps protect user privacy, reduce latency, and avoid dependency on the cloud—making on-device voice commands ideal on mobile where connectivity often fluctuates.
For mobile developers, offline voice commands opens powerful possibilities. Users can naturally issue commands, ask questions, or trigger complex actions wherever they go. Because mobile devices are always moving, stable connectivity isn't guaranteed—making offline capability essential for consistent, dependable voice experiences.
This guide shows how to add custom, on-device voice commands to a React Native app with Rhino Speech-to-Intent.
What you'll learn:
- Setting up audio recording permissions for iOS and Android
- Adding custom voice commands with Rhino
What you need:
- React Native 0.62.2+
- Android 5.0+ (API 21+)
- iOS 13.0+
Configure Microphone Permissions
Before we begin: voice commands require recording audio, which means the user must grant microphone permission. Set up your project to request and handle these permissions:
iOS Permissions
Add the following block to Info.plist:
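A minimal entry looks like the following (adjust the usage description to explain how your app uses the microphone):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to listen for voice commands.</string>
```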
Android Permissions
Add the following block to AndroidManifest.xml:
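These are the two standard permission declarations, placed inside the `<manifest>` element:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
```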
Internet is required only for licensing and usage tracking.
Before You Start: Add Wake Word Detection
If you haven't already, consider adding wake word detection first. This allows your app to "wake up" when a user says a simple phrase like "Hey AppName." Wake word detection and voice commands complement each other perfectly—you can design your app to listen for commands only when needed, creating a truly hands-free experience while saving compute resources.
Add Voice Commands with Rhino
- Install Rhino React Native Dependencies: To use Rhino Speech-to-Intent in your React Native project, install `@picovoice/react-native-voice-processor` and `@picovoice/rhino-react-native`:
- Get Your Picovoice AccessKey: Sign up for a free Picovoice Console account and obtain your `AccessKey`. The `AccessKey` is only required for authentication and authorization.
- Train Custom Voice Commands: Create a custom context using the Picovoice Console. This is where you define the voice commands you want your app to handle. Refer to the syntax cheat sheet to learn how to build your context.
If you'd like to see a video walkthrough for creating a custom Rhino context, check out Picovoice Console Tutorial: Rhino Speech-to-Intent.
- Add Context Files to Your Project: Once you've created your context, download two context files: one for Android and one for iOS. The files have a `.rhn` extension. In the Android subproject, add the Android context file to the `assets` folder (`${ANDROID_APP}/src/main/assets`). In the iOS subproject, add the iOS context file to the `Copy Bundle Resources` build step. In the following code examples, this file is referred to as `${CONTEXT_FILE_PATH}`.
- Use Non-English Model Files (Optional): The English model file is built into Rhino, so if your custom context is in English, you can skip this step. Otherwise, you will need the corresponding Rhino model file. As with the context files, add it to the `assets` folder for Android and the `Copy Bundle Resources` step for iOS. In the following code examples, this file is referred to as `${MODEL_FILE_PATH}`.
Step-by-Step Code Implementation
First, we'll break down the key components of the Rhino React Native SDK. At the end is a fully implemented component that you can simply drop into your project for a quick proof of concept.
- Define a function to handle the intent returned by `RhinoManager` (next step). The intent is delivered as a `RhinoInference` object. Also define an error handler callback.
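A sketch of the two callbacks. The `Inference` interface below mirrors the fields a `RhinoInference` carries (`isUnderstood`, `intent`, `slots`), and `formatInference` is an illustrative helper, not part of the SDK:

```typescript
// Minimal shape of the inference object delivered by RhinoManager
// (RhinoInference in @picovoice/rhino-react-native carries these fields).
interface Inference {
  isUnderstood?: boolean;
  intent?: string;
  slots?: { [key: string]: string };
}

// Illustrative helper: render an inference as a readable string.
function formatInference(inference: Inference): string {
  if (!inference.isUnderstood) {
    return 'Command not understood';
  }
  const slots = Object.entries(inference.slots ?? {})
    .map(([name, value]) => `${name}=${value}`)
    .join(', ');
  return `intent: ${inference.intent}${slots ? ` (${slots})` : ''}`;
}

// Called by RhinoManager once it has finished processing an utterance.
function inferenceCallback(inference: Inference): void {
  console.log(formatInference(inference));
  // Dispatch on inference.intent here to perform the requested action.
}

// Called if an error occurs while recording or processing audio.
function processErrorCallback(error: Error): void {
  console.error(error.message);
}
```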
- Create an instance of `RhinoManager`. Be sure to replace the placeholders, and pass in your `inferenceCallback()` and `processErrorCallback()` functions.
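A sketch of the creation step, assuming the `RhinoManager.create` signature from the Rhino React Native SDK (access key and context path first, then the inference callback, with the error callback and model path as optional arguments):

```typescript
import { RhinoManager } from '@picovoice/rhino-react-native';

async function createRhinoManager(): Promise<RhinoManager> {
  return await RhinoManager.create(
    '${ACCESS_KEY}',         // AccessKey from Picovoice Console
    '${CONTEXT_FILE_PATH}',  // .rhn context file added to the project
    inferenceCallback,
    processErrorCallback,
    // '${MODEL_FILE_PATH}', // only needed for non-English contexts
  );
}
```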
- To start listening for voice commands, call `process()`. When `RhinoManager` detects that something was said, it automatically stops processing, whether or not it understood the command. To listen for the next command, call `process()` again.
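In code, one listen cycle might look like this (the `listenForCommand` wrapper is illustrative, not part of the SDK):

```typescript
import { RhinoManager } from '@picovoice/rhino-react-native';

// Start a single inference pass. process() begins recording; RhinoManager
// stops automatically once it has made an inference, understood or not.
// Call this again after each inference to keep listening.
async function listenForCommand(rhinoManager: RhinoManager): Promise<void> {
  await rhinoManager.process();
}
```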
- Once you no longer need `RhinoManager`, release the resources it acquired by calling `delete()`.
Complete Code Implementation
Below is a fully implemented component you can copy into your project to see Rhino in action. Be sure to replace the placeholders ${ACCESS_KEY}, ${CONTEXT_FILE_PATH}, and ${MODEL_FILE_PATH} (if applicable).
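The following is a sketch of such a component. The component name, state handling, and UI are illustrative; the Rhino calls follow the steps above:

```typescript
import React, { useEffect, useRef, useState } from 'react';
import { Button, SafeAreaView, Text } from 'react-native';
import { RhinoManager, RhinoInference } from '@picovoice/rhino-react-native';

export default function VoiceCommandScreen() {
  const rhinoRef = useRef<RhinoManager | null>(null);
  const [result, setResult] = useState('Press Listen and speak a command');

  useEffect(() => {
    (async () => {
      // Create RhinoManager once, on mount.
      rhinoRef.current = await RhinoManager.create(
        '${ACCESS_KEY}',
        '${CONTEXT_FILE_PATH}',
        (inference: RhinoInference) => {
          // Inference callback: runs after each process() cycle completes.
          setResult(
            inference.isUnderstood
              ? `Intent: ${inference.intent}\nSlots: ${JSON.stringify(inference.slots)}`
              : 'Command not understood',
          );
        },
        (error) => setResult(error.message),
        // '${MODEL_FILE_PATH}', // uncomment for a non-English context
      );
    })();

    return () => {
      // Release native audio and model resources on unmount.
      rhinoRef.current?.delete();
    };
  }, []);

  return (
    <SafeAreaView>
      {/* Each press starts one listen/inference cycle. */}
      <Button title="Listen" onPress={() => rhinoRef.current?.process()} />
      <Text>{result}</Text>
    </SafeAreaView>
  );
}
```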
To see a complete working project, check out Rhino's React Native demo on GitHub. For more in-depth information, refer to the Rhino React Native SDK quick start guide.