Picovoice Platform
Flutter Quick Start


Platforms

  • Flutter (1.20.0+)
  • Android (4.4+, API 19+)
  • iOS (9.0+)

Requirements

  • Flutter SDK
  • Android SDK (16+)
  • JDK (8+)
  • Xcode (9+)

Picovoice Account & AccessKey

Sign up for or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Quick Start

Setup

  1. Install Flutter SDK.

  2. Run flutter doctor to determine any missing requirements.

  3. Add the Picovoice plugin to your app project by referencing it in pubspec.yaml:

dependencies:
  picovoice_flutter: ^<version>

  4. Enable the permissions required for recording with the device's microphone on both iOS and Android:

iOS

Open your Info.plist and add the following entry:

<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>

Android

Open your AndroidManifest.xml and add the following lines:

<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.INTERNET"/>

Usage

  1. Add a Porcupine keyword file (.ppn) and a Rhino context file (.rhn) to the assets folder in the project directory.

  2. Add them to the pubspec.yaml:

flutter:
  assets:
    - assets/${KEYWORD_FILE}
    - assets/${CONTEXT_FILE}

  3. Create an instance of PicovoiceManager using the constructor PicovoiceManager.create() that detects the wake word and infers intents from spoken commands:
import 'package:picovoice_flutter/picovoice_manager.dart';
import 'package:picovoice_flutter/picovoice_error.dart';

_picovoiceManager = PicovoiceManager.create(
    "{ACCESS_KEY}",
    "assets/${KEYWORD_FILE}",
    _wakeWordCallback,
    "assets/${CONTEXT_FILE}",
    _inferenceCallback);

Alternatively, if the model files are deployed to the device with a different method, the absolute paths to the files on device can be used.
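As a hypothetical sketch of this alternative, assuming the files were previously saved to the app's documents directory (the path_provider plugin usage and the keyword.ppn / context.rhn file names are illustrative, not part of the Picovoice API):

```dart
import 'package:path_provider/path_provider.dart';
import 'package:picovoice_flutter/picovoice_manager.dart';

Future<void> _initPicovoice() async {
  // Resolve absolute paths to model files previously downloaded to the device.
  final docsDir = await getApplicationDocumentsDirectory();
  final keywordPath = "${docsDir.path}/keyword.ppn"; // illustrative file name
  final contextPath = "${docsDir.path}/context.rhn"; // illustrative file name

  _picovoiceManager = PicovoiceManager.create(
      "{ACCESS_KEY}",
      keywordPath,
      _wakeWordCallback,
      contextPath,
      _inferenceCallback);
}
```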

The _wakeWordCallback and _inferenceCallback parameters are functions that are invoked when Porcupine detects the wake word and Rhino makes an intent inference, respectively.

void _wakeWordCallback() {
  // wake word detected
}

void _inferenceCallback(RhinoInference inference) {
  if (inference.isUnderstood!) {
    String intent = inference.intent!;
    Map<String, String> slots = inference.slots!;
    // take action based on inferred intent and slot values
  } else {
    // handle unsupported commands
  }
}

Start audio capture and processing:

try {
  await _picovoiceManager.start();
} on PicovoiceException catch (ex) {
  // deal with Picovoice init error
}

Stop it when done with Picovoice:

await _picovoiceManager.stop();

To use your own audio processing pipeline, check out the Picovoice Low-Level API.
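Putting the steps above together, a minimal sketch of tying the manager to a widget's lifecycle (the widget names and asset file names are illustrative; RhinoInference is assumed to come from the rhino_flutter package that picovoice_flutter depends on):

```dart
import 'package:flutter/material.dart';
import 'package:picovoice_flutter/picovoice_manager.dart';
import 'package:picovoice_flutter/picovoice_error.dart';
import 'package:rhino_flutter/rhino.dart';

class VoiceControlWidget extends StatefulWidget {
  const VoiceControlWidget({Key? key}) : super(key: key);

  @override
  State<VoiceControlWidget> createState() => _VoiceControlWidgetState();
}

class _VoiceControlWidgetState extends State<VoiceControlWidget> {
  late PicovoiceManager _picovoiceManager;

  @override
  void initState() {
    super.initState();
    // create the manager once, then start audio capture
    _picovoiceManager = PicovoiceManager.create(
        "{ACCESS_KEY}",
        "assets/keyword.ppn",
        _wakeWordCallback,
        "assets/context.rhn",
        _inferenceCallback);
    _startProcessing();
  }

  Future<void> _startProcessing() async {
    try {
      await _picovoiceManager.start();
    } on PicovoiceException {
      // handle initialization errors (e.g. invalid AccessKey, missing assets)
    }
  }

  void _wakeWordCallback() {
    // wake word detected; Rhino now listens for the follow-on command
  }

  void _inferenceCallback(RhinoInference inference) {
    // act on the inferred intent and slots, or handle unsupported commands
  }

  @override
  void dispose() {
    _picovoiceManager.stop(); // release the microphone when the widget goes away
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => const SizedBox.shrink();
}
```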

Custom Wake Words & Contexts

  1. Create custom wake words and contexts with the Picovoice Console.

  2. Download the custom Porcupine keyword (.ppn) and Rhino context (.rhn) files.

  3. Add them to the pubspec.yaml:

flutter:
  assets:
    - assets/${CUSTOM_KEYWORD_FILE}
    - assets/${CUSTOM_CONTEXT_FILE}

  4. Create an instance of PicovoiceManager using the constructor PicovoiceManager.create().

Non-English Languages

Use the corresponding model files (.pv) to detect non-English wake words and infer intents from non-English contexts. The model files for all supported languages are available on the Porcupine GitHub repository and Rhino GitHub repository.

  1. Add the model files, along with the keyword and context files, to the assets folder in the project directory.

  2. Add them to the pubspec.yaml:

flutter:
  assets:
    - assets/${CUSTOM_KEYWORD_FILE}
    - assets/${CUSTOM_CONTEXT_FILE}
    - assets/${PORCUPINE_MODEL_FILE}
    - assets/${RHINO_MODEL_FILE}

  3. Create an instance of PicovoiceManager using the constructor PicovoiceManager.create():
_picovoiceManager = PicovoiceManager.create(
    "{ACCESS_KEY}",
    "assets/{CUSTOM_KEYWORD_FILE}",
    _wakeWordCallback,
    "assets/{CUSTOM_CONTEXT_FILE}",
    _inferenceCallback,
    porcupineModelPath: "assets/{PORCUPINE_MODEL_FILE}",
    rhinoModelPath: "assets/{RHINO_MODEL_FILE}");

Alternatively, if the model files are deployed to the device with a different method, the absolute paths to the files on device can be used.

Demo

For the Picovoice Flutter SDK, we offer demo applications that demonstrate how to use the end-to-end Picovoice platform on real-time audio streams (i.e. microphone input).

Setup

Clone the Picovoice GitHub Repository:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Usage

  1. Replace {YOUR_ACCESS_KEY_HERE} with a valid AccessKey in the demo/flutter/lib/main.dart file.

  2. Copy assets:

cd picovoice/demo/flutter
bash copy_assets.sh

NOTE: On Windows, Git Bash or another bash shell is required; otherwise you will have to manually copy:

  • The android keyword into ${DEMO_FOLDER}/assets/keyword_files/android.
  • The android context into ${DEMO_FOLDER}/assets/contexts/android.
  • The iOS keyword into ${DEMO_FOLDER}/assets/keyword_files/ios.
  • The iOS context into ${DEMO_FOLDER}/assets/contexts/ios.

  3. Build and deploy the demo to your device:
flutter run

For more information on our Picovoice demos for Flutter, head over to our GitHub repository.

Resources

Package

  • picovoice_flutter on pub.dev

API

  • picovoice_flutter API Docs

GitHub

  • Picovoice Flutter SDK on GitHub
  • Picovoice Flutter Demos on GitHub

Benchmarks

  • Wake Word Benchmark
  • Speech-to-Intent Benchmark

Further Reading

  • Offline Speech Recognition in Flutter: No Siri, No Google, and No, It’s Not Speech-To-Text

Video

  • Offline Speech Recognition with Flutter (iOS/Android)
