Rhino Speech-to-Intent
Unity Quick Start


Platforms

  • Linux (x86_64)
  • macOS (x86_64, arm64)
  • Windows (x86_64)
  • Android 4.4+ (API 19+) (ARM only)
  • iOS 9.0+

Requirements

  • Picovoice Account & AccessKey
  • Unity 2017.4+ (Unity 2021.2+ for macOS arm64)
  • Unity Build Support modules for desired platforms

Picovoice Account & AccessKey

Sign up for or log in to Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Quick Start

Setup

  1. Download and install Unity.
  2. Download and import the latest Rhino Unity package.

NOTE: For running Rhino on macOS arm64, use the rhino-*-Apple-silicon.unitypackage version with Unity 2021.2+.

Usage

Create an instance of Rhino using either a pre-built context file (.rhn) from the Rhino GitHub Repository or a custom context created with the Picovoice Console:

using System.Collections.Generic;

using Pv.Unity;

void inferenceCallback(Inference inference)
{
    if (inference.IsUnderstood)
    {
        string intent = inference.Intent;
        Dictionary<string, string> slots = inference.Slots;
        // take action based on inferred intent and slot values
    }
    else
    {
        // handle unsupported commands
    }
}

RhinoManager rhinoManager = RhinoManager.Create(
    "${ACCESS_KEY}",
    "${CONTEXT_FILE_PATH}", // path is relative to the `StreamingAssets` folder
    inferenceCallback);

Start audio capture and intent inference with:

rhinoManager.Process();

Once an inference has been made, the inferenceCallback will be invoked and audio capture will stop automatically.
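To keep listening for subsequent commands, one option (a sketch, not part of the snippet above) is to restart capture from the callback once the inference has been handled; this assumes rhinoManager is stored in a field the callback can reach:

// Sketch: resume listening after each inference by restarting audio capture.
// Assumes `rhinoManager` is a class field initialized as shown above.
void inferenceCallback(Inference inference)
{
    if (inference.IsUnderstood)
    {
        // take action based on inference.Intent and inference.Slots
    }

    // start capturing audio again for the next utterance
    rhinoManager.Process();
}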

Release resources acquired by RhinoManager with:

rhinoManager.Delete();

For use cases where an audio capture pipeline is not required, there is the Low-Level Rhino API.
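As a rough sketch of that pattern (assuming the low-level Rhino class exposes Create, Process, GetInference, and Dispose, and that frames are FrameLength 16-bit samples at SampleRate, mirroring the other Rhino SDKs; GetNextAudioFrame() is a hypothetical helper for your own audio source):

using Pv.Unity;

// Sketch only; check the Pv.Unity.Rhino API docs for the exact signatures.
Rhino rhino = Rhino.Create("${ACCESS_KEY}", "${CONTEXT_FILE_PATH}");

bool isFinalized = false;
while (!isFinalized)
{
    // `GetNextAudioFrame()` is a hypothetical helper that returns
    // `rhino.FrameLength` 16-bit samples captured at `rhino.SampleRate`.
    short[] frame = GetNextAudioFrame();
    isFinalized = rhino.Process(frame);
}

Inference inference = rhino.GetInference();
if (inference.IsUnderstood)
{
    // act on inference.Intent and inference.Slots
}

// free native resources when done
rhino.Dispose();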

Custom Contexts

Create custom contexts using the Picovoice Console. Download the custom context file (.rhn) and place it in the StreamingAssets folder of the Unity project. Pass the path, relative to StreamingAssets, to RhinoManager.Create.
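For example, with a context file at Assets/StreamingAssets/contexts/smart_lighting_android.rhn (a hypothetical location and file name), the call would look like:

// Hypothetical layout: Assets/StreamingAssets/contexts/smart_lighting_android.rhn
RhinoManager rhinoManager = RhinoManager.Create(
    "${ACCESS_KEY}",
    "contexts/smart_lighting_android.rhn", // relative to `StreamingAssets`
    inferenceCallback);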

Non-English Languages

Use the corresponding model file (.pv) to make inferences in non-English contexts. The model files for all supported languages are available on the Rhino GitHub repository. Pass in the model file using the modelPath input argument to change the language:

RhinoManager rhinoManager = RhinoManager.Create(
    "${ACCESS_KEY}",
    "${CONTEXT_FILE_PATH}",
    inferenceCallback,
    modelPath: "${MODEL_FILE_PATH}");

Demo

For the Rhino Unity SDK, we offer demo applications that demonstrate how to use the Speech-to-Intent engine on real-time audio streams (i.e. microphone input).

Setup

Download and import the latest Rhino Unity package.

Usage

  1. Open the Rhino demo scene (Rhino/RhinoDemo/RhinoDemo.unity).
  2. Copy your AccessKey from Picovoice Console into the ACCESS_KEY variable in RhinoDemo.cs (see the snippet after this list).
  3. Play the scene in the editor, or go to File > Build Settings and click Build and Run to compile and run the scene on the selected platform.
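The exact declaration in RhinoDemo.cs may differ, but the edit is roughly of this form (the placeholder value is yours to fill in):

// In RhinoDemo.cs (illustrative only; surrounding class code omitted)
private const string ACCESS_KEY = "${YOUR_ACCESS_KEY}";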

Resources

Package

  • Pv.Unity.Rhino on GitHub

API

  • Pv.Unity.Rhino API Docs

GitHub

  • Rhino Unity SDK on GitHub
  • Rhino Unity demos on GitHub

Benchmark

  • Speech-to-Intent Benchmark

Further Reading

  • Tutorial: Making a Hands-Free Video Player in Unity

Video

  • Voice-Controlled VR Video Player Made in Unity
