
Rhino Speech-to-Intent
Flutter Quick Start


Platforms

  • Flutter (1.20.0+)
  • Android (4.4+, API 19+)
  • iOS (9.0+)

Requirements

  • Flutter SDK
  • Android SDK (16+)
  • JDK (8+)
  • Xcode (9+)

Picovoice Account & AccessKey

Sign up for or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Quick Start

Setup

  1. Install Flutter SDK.

  2. Run flutter doctor to determine any missing requirements.

  3. Add the Rhino plugin to your app project by referencing it in pubspec.yaml:

dependencies:
  rhino_flutter: ^<version>

  4. Enable the proper permission for recording with the hardware's microphone on both iOS and Android:

iOS

Open your Info.plist and add the following line:

<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>

Android

Open your AndroidManifest.xml and add the following line:

<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />

Usage

  1. Add the context file (either a pre-built context file (.rhn) from the Rhino GitHub Repository or a custom context created with the Picovoice Console) to the assets folder in the project directory.

  2. Add the path to the pubspec.yaml:

flutter:
  assets:
    - assets/${CONTEXT_FILE}

  3. Create an instance of RhinoManager using the constructor RhinoManager.create, which handles audio recording and infers intents from spoken commands:
import 'package:rhino_flutter/rhino_manager.dart';
import 'package:rhino_flutter/rhino_error.dart';

try {
  _rhinoManager = await RhinoManager.create(
      "{ACCESS_KEY}",
      "assets/{CONTEXT_FILE}",
      _inferenceCallback);
} on RhinoException catch (err) {
  // handle Rhino init error
}

The _inferenceCallback parameter is a function that is invoked when Rhino makes an intent inference.

void _inferenceCallback(RhinoInference inference) {
  if (inference.isUnderstood!) {
    String intent = inference.intent!;
    Map<String, String> slots = inference.slots!;
    // take action based on the inferred intent and slot values
  } else {
    // handle unsupported commands
  }
}

Start audio capture and intent inference:

try {
  await _rhinoManager.process();
} on RhinoException catch (ex) {
  // handle audio or processing errors
}

Once an inference has been made, the _inferenceCallback will be invoked and audio capture will stop automatically. Release resources explicitly when done with Rhino:

await _rhinoManager.delete();

To use your own audio processing pipeline, check out the Low-Level Rhino API.
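The steps above can be combined into a minimal Flutter widget. This is an illustrative sketch, not part of the SDK: the widget name, button wiring, and status text are assumptions; `{ACCESS_KEY}` and `{CONTEXT_FILE}` are the same placeholders used throughout this guide.

```dart
import 'package:flutter/material.dart';
import 'package:rhino_flutter/rhino_manager.dart';
import 'package:rhino_flutter/rhino_error.dart';

// Sketch: create RhinoManager once, start listening on a button press,
// and release native resources when the widget is disposed.
class IntentButton extends StatefulWidget {
  const IntentButton({Key? key}) : super(key: key);

  @override
  State<IntentButton> createState() => _IntentButtonState();
}

class _IntentButtonState extends State<IntentButton> {
  RhinoManager? _rhinoManager;
  String _status = "idle";

  @override
  void initState() {
    super.initState();
    _initRhino();
  }

  Future<void> _initRhino() async {
    try {
      _rhinoManager = await RhinoManager.create(
          "{ACCESS_KEY}", "assets/{CONTEXT_FILE}", _inferenceCallback);
    } on RhinoException catch (err) {
      setState(() => _status = "init failed: ${err.message}");
    }
  }

  void _inferenceCallback(RhinoInference inference) {
    // Audio capture stops automatically once an inference is made.
    setState(() => _status = inference.isUnderstood!
        ? "intent: ${inference.intent}"
        : "not understood");
  }

  Future<void> _listen() async {
    try {
      await _rhinoManager?.process();
    } on RhinoException {
      setState(() => _status = "audio error");
    }
  }

  @override
  void dispose() {
    _rhinoManager?.delete(); // release native resources
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return ElevatedButton(onPressed: _listen, child: Text(_status));
  }
}
```

Creating the manager in initState and deleting it in dispose keeps the native engine's lifetime tied to the widget's, avoiding leaked audio resources.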

Custom Contexts

  1. Create custom contexts with the Picovoice Console. Download the custom context file (.rhn).

  2. Add the file to the assets folder in the project directory.

  3. Add it to the pubspec.yaml:

flutter:
  assets:
    - assets/${CUSTOM_CONTEXT_FILE}

  4. Create an instance of RhinoManager using the constructor RhinoManager.create with the custom context.

Alternatively, if the context file is deployed to the device with a different method, the absolute path to the file on device can be used.
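For example, if the context file was downloaded at runtime into the app's documents directory, its absolute path can be passed directly. A sketch of that, assuming the path_provider package (an assumption of this example, not a Rhino requirement) and a hypothetical file name:

```dart
import 'dart:io';
import 'package:path_provider/path_provider.dart';
import 'package:rhino_flutter/rhino_manager.dart';

// Sketch: create RhinoManager from a context file that lives on the
// device's file system rather than in the Flutter asset bundle.
Future<RhinoManager> createFromDownloadedContext(
    String accessKey, void Function(RhinoInference) callback) async {
  final dir = await getApplicationDocumentsDirectory();
  final contextPath = '${dir.path}/my_context.rhn'; // hypothetical file name

  if (!File(contextPath).existsSync()) {
    throw Exception('context file not found at $contextPath');
  }

  // An absolute path is accepted in place of an asset path.
  return RhinoManager.create(accessKey, contextPath, callback);
}
```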

Non-English Languages

Use the corresponding model file (.pv) to infer intents in languages other than English. The model files for all supported languages are available on the Rhino GitHub repository.

  1. Add the file to the assets folder in the project directory.
  2. Add it to the pubspec.yaml:
flutter:
  assets:
    - assets/${CUSTOM_CONTEXT_FILE}
    - assets/${MODEL_FILE}

  3. Pass in the model file using the modelPath parameter to change the language:
try {
  _rhinoManager = await RhinoManager.create(
      "{ACCESS_KEY}",
      "assets/{CUSTOM_CONTEXT_FILE}",
      _inferenceCallback,
      modelPath: "assets/{MODEL_FILE}");
} on RhinoException catch (err) {
  // handle Rhino init error
}

Alternatively, if the model file is deployed to the device with a different method, the absolute path to the file on device can be used.

Demo

For the Rhino Flutter SDK, we offer demo applications that demonstrate how to use the Speech-to-Intent engine on real-time audio streams (i.e. microphone input).

Setup

Clone the Rhino GitHub Repository:

git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Usage

  1. Replace {YOUR_ACCESS_KEY_HERE} with a valid AccessKey in the demo/flutter/lib/main.dart file.

  2. Copy assets:

cd rhino/demo/flutter
bash copy_assets.sh

NOTE: on Windows, Git Bash or another bash shell is required; otherwise you will have to manually copy:

  • The android context into ${DEMO_FOLDER}/assets/contexts/android.
  • The iOS context into ${DEMO_FOLDER}/assets/contexts/ios.
  3. Build and deploy the demo to your device:
flutter run

For more information on our Rhino demos for Flutter, head over to our GitHub repository.

Resources

Package

  • rhino_flutter on pub.dev

API

  • rhino_flutter API Docs

GitHub

  • Rhino Flutter SDK on GitHub
  • Rhino Flutter Demos on GitHub

Benchmark

  • Speech-to-Intent Benchmark

Further Reading

  • Offline Speech Recognition in Flutter: No Siri, No Google, and No, It’s Not Speech-To-Text

Video

  • Offline Speech Recognition with Flutter (iOS/Android)
