While Flutter offers the convenience of deploying apps pretty much anywhere, many developers get stumped when they require certain device-specific capabilities. Speech-to-Text is one such feature that can seem prohibitively difficult to add in a cross-platform framework. Picovoice’s Leopard Speech-to-Text SDK for Flutter allows us to add cross-platform, on-device speech recognition to a Flutter app with minimal code. With on-device speech-to-text, user audio never leaves the device to be transcribed in the cloud, giving us the benefit of significantly reduced latency and our users the benefit of increased privacy.
Setting Up a Flutter Project
Create a new Flutter project or open an existing one, and ensure you have the necessary permissions enabled for each platform:
Android Permissions
For Android, add the following to your AndroidManifest.xml:
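A minimal sketch of the required entries, placed inside the top-level <manifest> element:

```xml
<!-- RECORD_AUDIO is needed to capture microphone audio -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- INTERNET is only used to validate the Picovoice AccessKey -->
<uses-permission android:name="android.permission.INTERNET" />
```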
Internet permission is only required for Picovoice AccessKey validation; audio will not be streamed.
iOS Permissions
For iOS, add the following to your Info.plist:
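Something along these lines (the description string is just an example; use whatever wording fits your app):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to transcribe speech.</string>
```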
To incorporate Leopard Speech-to-Text into your Flutter project, you’ll need to add the leopard_flutter plugin as a dependency. Open your project's pubspec.yaml file and add the following:
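For example (the version constraint here is illustrative; check pub.dev for the current release):

```yaml
dependencies:
  flutter:
    sdk: flutter
  leopard_flutter: ^2.0.0   # check pub.dev for the latest version
```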
Place the desired language model into your project’s assets/ folder and add the file to your pubspec.yaml:
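Something like the following, where leopard_params.pv stands in for whichever model file you downloaded:

```yaml
flutter:
  assets:
    - assets/leopard_params.pv
```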
Lastly, you will need a Picovoice AccessKey, which can be obtained with a free Picovoice Console account.
Transcribing Audio with Leopard Speech-to-Text
Import leopard_flutter and create an instance of the Leopard class:
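A minimal sketch, assuming the model file added above and an AccessKey from Picovoice Console:

```dart
import 'package:leopard_flutter/leopard.dart';
import 'package:leopard_flutter/leopard_error.dart';

const String accessKey = '{YOUR_ACCESS_KEY}';          // AccessKey from Picovoice Console
const String modelPath = 'assets/leopard_params.pv';   // model file added to pubspec.yaml

Leopard? _leopard;

Future<void> initLeopard() async {
  try {
    _leopard = await Leopard.create(accessKey, modelPath);
  } on LeopardException catch (err) {
    // handle invalid AccessKey, missing model file, etc.
    print('Failed to initialize Leopard: ${err.message}');
  }
}
```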
Now, let’s assume we’re buffering audio data from the device’s microphone elsewhere in the app (we’ll implement this in the next section). We’ll take this PCM array and pass it straight into Leopard to be converted to text:
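A sketch of that call, assuming the buffered audio is 16-bit, single-channel PCM at Leopard’s sample rate (16 kHz):

```dart
Future<void> transcribe(List<int> pcmFrames) async {
  if (_leopard == null) {
    return;
  }
  try {
    // process() takes the full buffer of PCM samples and returns the transcript
    LeopardTranscript result = await _leopard!.process(pcmFrames);
    print(result.transcript);
  } on LeopardException catch (err) {
    print('Failed to transcribe audio: ${err.message}');
  }
}
```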
The LeopardTranscript also contains some useful word metadata. The start position, end position, and confidence are generated for each word:
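A quick sketch of reading that metadata, assuming the words list exposes word, startSec, endSec, and confidence fields:

```dart
LeopardTranscript result = await _leopard!.process(pcmFrames);
for (final word in result.words) {
  print('"${word.word}" '
      'start: ${word.startSec}s, '
      'end: ${word.endSec}s, '
      'confidence: ${word.confidence}');
}
```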
How to Record Audio in Flutter
As with many other cross-platform frameworks, recording media in Flutter can be challenging: it requires native implementations on each platform, unified by an interface that Flutter can call into. To simplify the process for our own demos, we created an audio capture plugin called flutter_voice_processor that handles all of this complexity for us. Add the plugin to your pubspec.yaml and then add the following code to start buffering audio data:
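A rough sketch, assuming the flutter_voice_processor dependency has been added to pubspec.yaml (version per pub.dev) and using its singleton VoiceProcessor.instance with a frame listener:

```dart
import 'package:flutter_voice_processor/flutter_voice_processor.dart';

final VoiceProcessor? _voiceProcessor = VoiceProcessor.instance;
final List<int> _pcmBuffer = [];

Future<void> startRecording() async {
  // each frame is a List<int> of 16-bit PCM samples
  _voiceProcessor?.addFrameListener((List<int> frame) {
    _pcmBuffer.addAll(frame);
  });

  if (await _voiceProcessor?.hasRecordAudioPermission() ?? false) {
    // 512 samples per frame at Leopard's 16 kHz sample rate
    await _voiceProcessor?.start(512, 16000);
  }
}

Future<void> stopRecording() async {
  await _voiceProcessor?.stop();
  // hand the buffered audio to Leopard (see transcribe() above)
  await transcribe(_pcmBuffer);
  _pcmBuffer.clear();
}
```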
Putting It All Together
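Below is a minimal sketch of how these pieces might fit together in a single widget; TranscriptionScreen and the asset path are illustrative names, not part of either SDK:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_voice_processor/flutter_voice_processor.dart';
import 'package:leopard_flutter/leopard.dart';
import 'package:leopard_flutter/leopard_error.dart';

class TranscriptionScreen extends StatefulWidget {
  const TranscriptionScreen({super.key});

  @override
  State<TranscriptionScreen> createState() => _TranscriptionScreenState();
}

class _TranscriptionScreenState extends State<TranscriptionScreen> {
  static const String _accessKey = '{YOUR_ACCESS_KEY}';

  final VoiceProcessor? _voiceProcessor = VoiceProcessor.instance;
  final List<int> _pcmBuffer = [];

  Leopard? _leopard;
  bool _isRecording = false;
  String _transcript = '';

  @override
  void initState() {
    super.initState();
    _init();
  }

  Future<void> _init() async {
    try {
      // create Leopard and register a listener that buffers incoming PCM frames
      _leopard = await Leopard.create(_accessKey, 'assets/leopard_params.pv');
      _voiceProcessor?.addFrameListener((frame) => _pcmBuffer.addAll(frame));
    } on LeopardException catch (err) {
      print('Failed to initialize Leopard: ${err.message}');
    }
  }

  Future<void> _toggleRecording() async {
    if (_isRecording) {
      // stop capturing and transcribe everything buffered so far
      await _voiceProcessor?.stop();
      final result = await _leopard?.process(_pcmBuffer);
      setState(() {
        _transcript = result?.transcript ?? '';
        _isRecording = false;
      });
      _pcmBuffer.clear();
    } else {
      if (await _voiceProcessor?.hasRecordAudioPermission() ?? false) {
        _pcmBuffer.clear();
        await _voiceProcessor?.start(512, 16000);
        setState(() => _isRecording = true);
      }
    }
  }

  @override
  void dispose() {
    // release native resources held by Leopard
    _leopard?.delete();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(child: Text(_transcript)),
      floatingActionButton: FloatingActionButton(
        onPressed: _toggleRecording,
        child: Icon(_isRecording ? Icons.stop : Icons.mic),
      ),
    );
  }
}
```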
This is a simplified example, but it contains all the necessary code to get started. For a complete demo application, see the Leopard Speech-to-Text Flutter demo on our GitHub repository.
Real-time Transcription
You may have noticed that the Leopard Speech-to-Text engine operates on chunks of buffered audio. For most speech recognition applications, this is the use case we’ll be dealing with. However, if we want to provide live feedback to the user, we’ll need a real-time transcription engine. Picovoice’s Cheetah Streaming Speech-to-Text engine streams transcription results while audio is still being captured. Check out the cheetah_flutter plugin if real-time feedback is a requirement for your project.
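As a rough sketch of what that looks like, assuming the cheetah_flutter API (Cheetah.create, process, flush) and the same model-file pattern as Leopard, partial results can be appended as frames arrive from the VoiceProcessor:

```dart
import 'package:cheetah_flutter/cheetah.dart';

Cheetah? _cheetah;
String _liveTranscript = '';

Future<void> initCheetah(String accessKey) async {
  _cheetah = await Cheetah.create(accessKey, 'assets/cheetah_params.pv');
}

// called from the VoiceProcessor frame listener
Future<void> onFrame(List<int> frame) async {
  // each call returns the newly transcribed text since the last frame
  CheetahTranscript partial = await _cheetah!.process(frame);
  _liveTranscript += partial.transcript;
  if (partial.isEndpoint) {
    // the speaker paused; flush any remaining buffered text
    CheetahTranscript remaining = await _cheetah!.flush();
    _liveTranscript += remaining.transcript;
  }
}
```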
Cross-Platform Alternatives to Flutter
If Flutter is not your cross-platform framework of choice, there are also React Native SDKs for both Leopard and Cheetah. Cross-platform desktop and web are also supported through SDKs in Python, .NET, and JavaScript, to name a few. Check out the Leopard and Cheetah docs to see all the available SDKs and a wide array of helpful demos.