Porcupine - Flutter API

  • Wake Word Detection
  • Local Voice Commands
  • Offline Keyword Spotting
  • Always Listening
  • Voice Activation
  • Flutter
  • Dart
  • Mobile
  • Android
  • iOS

This document outlines how to integrate the Porcupine wake word engine into an application using its Flutter API.

Prerequisites

  • Flutter SDK
  • Android SDK 16+
  • JDK 8+
  • Xcode 9+

Follow Flutter's guide to install the Flutter SDK on your system. Once installed, you can run flutter doctor to determine any other missing requirements.


Compatibility

  • Flutter 1.20.0+
  • Android 4.1+ (API 16+)
  • iOS 9.0+


Installation

To add the Porcupine plugin to your app project, reference it in your pubspec.yaml:

dependencies:
  porcupine: ^<version>

If you prefer to clone the repo and use it locally, first run copy_resources.sh (NOTE: on Windows, Git Bash or another bash shell is required, or you will have to manually copy the libs into the project). Then you can reference the local binding location in your pubspec.yaml:

porcupine:
  path: /path/to/porcupine/flutter/binding


Permissions

To enable recording with the hardware's microphone, you must first ensure that you have enabled the proper permission on both iOS and Android.

On iOS, open your Info.plist and add the following lines:

<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>

On Android, open your AndroidManifest.xml and add the following line:

<uses-permission android:name="android.permission.RECORD_AUDIO" />


Usage

The module provides two levels of API to choose from, depending on your needs.

High-Level API

PorcupineManager provides a high-level API that takes care of audio recording. This class is the quickest way to get started.

Using the constructor PorcupineManager.fromKeywords will create an instance of the PorcupineManager using one or more of the built-in keywords.

import 'package:porcupine/porcupine_manager.dart';
import 'package:porcupine/porcupine_error.dart';

void createPorcupineManager() async {
  try {
    _porcupineManager = await PorcupineManager.fromKeywords(
        ["picovoice", "porcupine"],
        _wakeWordCallback);
  } on PvError catch (err) {
    // handle porcupine init error
  }
}

NOTE: the call is asynchronous and therefore should be called in an async block with a try/catch.

The wakeWordCallback parameter is a function that you want to execute when Porcupine has detected one of the keywords. The function should accept a single integer, keywordIndex, which specifies which wake word has been detected.

void _wakeWordCallback(int keywordIndex) {
  if (keywordIndex == 0) {
    // picovoice detected
  } else if (keywordIndex == 1) {
    // porcupine detected
  }
}

Available built-in keywords are stored in the constants PorcupineManager.BUILT_IN_KEYWORDS and Porcupine.BUILT_IN_KEYWORDS.

To create an instance of PorcupineManager that detects custom keywords, you can use the PorcupineManager.fromKeywordPaths static constructor and provide the paths to the .ppn file(s).

_porcupineManager = await PorcupineManager.fromKeywordPaths(
    ["/path/to/keyword/file/one.ppn"],
    _wakeWordCallback);

In addition to custom keywords, you can override the default Porcupine model file and/or keyword sensitivities, as well as add an error callback you want to trigger if there's a problem encountered while Porcupine is processing frames.

These optional parameters can be passed in like so:

_porcupineManager = await PorcupineManager.fromKeywordPaths(
    ["/path/to/keyword/file/one.ppn", "/path/to/keyword/file/two.ppn"],
    _wakeWordCallback,
    modelPath: 'path/to/model/file.pv',
    sensitivities: [0.25, 0.6],
    errorCallback: _errorCallback);

void _errorCallback(PvError error) {
  // handle error
}
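Each sensitivity is a value in [0, 1] that trades miss rate against false alarm rate, and the list must line up one-to-one with the keyword paths. As a quick sanity check before construction, a plain-Dart helper like the following can catch mismatches early (validateKeywordArgs is a hypothetical name, not part of the Porcupine API):

```dart
// Hypothetical helper: checks that custom keyword arguments are
// consistent before they are passed to PorcupineManager.fromKeywordPaths.
// Returns null when the arguments look valid, or an error message.
String? validateKeywordArgs(
    List<String> keywordPaths, List<double> sensitivities) {
  if (keywordPaths.isEmpty) {
    return 'at least one keyword path is required';
  }
  if (sensitivities.length != keywordPaths.length) {
    return 'expected ${keywordPaths.length} sensitivities, '
        'got ${sensitivities.length}';
  }
  for (final s in sensitivities) {
    if (s < 0.0 || s > 1.0) {
      return 'sensitivity $s is outside [0, 1]';
    }
  }
  return null; // arguments look consistent
}
```

This duplicates validation the engine performs itself, but lets you fail with your own message before touching native code.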

Once you have instantiated a PorcupineManager, you can start audio capture and wake word detection by calling:

try {
  await _porcupineManager.start();
} on PvAudioException catch (ex) {
  // deal with audio exception
}

And then stop it by calling:

await _porcupineManager.stop();

Once the app is done using an instance of PorcupineManager, be sure to explicitly release the resources allocated to Porcupine:

_porcupineManager.delete();

There is no need to deal with audio capture to enable wake word detection with PorcupineManager. It uses our flutter_voice_processor Flutter plugin to capture frames of audio and automatically pass them to the wake word engine.

Low-Level API

Porcupine provides low-level access to the wake word engine for those who want to incorporate wake word detection into an existing audio processing pipeline.

Like PorcupineManager, Porcupine has fromKeywords and fromKeywordPaths static constructors.

import 'package:porcupine/porcupine.dart';
import 'package:porcupine/porcupine_error.dart';

void createPorcupine() async {
  try {
    _porcupine = await Porcupine.fromKeywords(["picovoice"]);
  } on PvError catch (err) {
    // handle porcupine init error
  }
}

To search for a keyword in audio, you must pass frames of audio to Porcupine using the process function. The keywordIndex returned will either be -1 if no detection was made or an integer specifying which keyword was detected.

List<int> buffer = getAudioFrame();

try {
  int keywordIndex = _porcupine.process(buffer);
  if (keywordIndex >= 0) {
    // detection made!
  }
} on PvError catch (error) {
  // handle error
}

For process to work correctly, the audio data must be in the audio format required by Picovoice. The required audio format is found by calling .sampleRate to get the required sample rate and .frameLength to get the required frame size. Audio must be single-channel and 16-bit linearly-encoded.
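If your audio arrives in arbitrarily sized buffers, you will need to re-chunk it into frames of exactly .frameLength samples before calling process. A plain-Dart sketch of that slicing (toFrames is an illustrative helper, not part of the Porcupine API):

```dart
// Illustrative only: split a mono, 16-bit PCM sample stream into
// fixed-size frames of `frameLength` samples each, the shape that
// Porcupine's process function expects.
List<List<int>> toFrames(List<int> samples, int frameLength) {
  final frames = <List<int>>[];
  // Emit only complete frames; trailing samples are left over.
  for (var i = 0; i + frameLength <= samples.length; i += frameLength) {
    frames.add(samples.sublist(i, i + frameLength));
  }
  return frames;
}
```

Samples left over at the end do not form a complete frame; in a real pipeline you would buffer them until the next read rather than drop them as this sketch does.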

Finally, once you no longer need the wake word engine, be sure to explicitly release the resources allocated to Porcupine:

_porcupine.delete();

Custom Wake Word

You can create custom Porcupine wake word models using Picovoice Console.

Custom Wake Word Integration

To add a custom wake word to your Flutter application, first add the .ppn file to an assets folder in your project directory. Then add it to your pubspec.yaml:

flutter:
  assets:
    - assets/keyword.ppn

In your Flutter code, use the path_provider plugin to extract the asset file to your device like so:

import 'dart:io';
import 'package:flutter/services.dart';
import 'package:path_provider/path_provider.dart';

String keywordAsset = "assets/keyword.ppn";
String extractedKeywordPath = await _extractAsset(keywordAsset);
// create Porcupine
// ...

Future<String> _extractAsset(String resourcePath) async {
  // extraction destination
  String resourceDirectory = (await getApplicationDocumentsDirectory()).path;
  String outputPath = '$resourceDirectory/$resourcePath';
  File outputFile = File(outputPath);

  ByteData data = await rootBundle.load(resourcePath);
  final buffer = data.buffer;

  await outputFile.create(recursive: true);
  await outputFile.writeAsBytes(
      buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
  return outputPath;
}

Non-English Wake Words

In order to detect non-English wake words, you need to use the corresponding model file. The model files for all supported languages are available in the Porcupine GitHub repository.
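One way to organize this in a Flutter app is to bundle one model file per language under assets and pick the path at runtime, then extract and pass the chosen asset as modelPath the same way as a keyword file. The asset names below are hypothetical placeholders for illustration, not the actual file names shipped with Porcupine:

```dart
// Map of language code to bundled model asset. The file names are
// placeholders; use the model files from the Porcupine repository
// for the languages you actually ship.
const Map<String, String> modelAssets = {
  'en': 'assets/porcupine_params.pv',
  'de': 'assets/porcupine_params_de.pv',
  'es': 'assets/porcupine_params_es.pv',
};

// Fall back to the English model when a language is not bundled.
String modelAssetForLanguage(String languageCode) =>
    modelAssets[languageCode] ?? modelAssets['en']!;
```

The returned asset path would then be extracted to the device (as shown in the asset extraction example above) before being handed to Porcupine.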
