Picovoice Platform — iOS Quick Start

Platforms

  • iOS (9.0+)

Requirements

Picovoice Account & AccessKey

  1. Log in or sign up for a free account on the Picovoice Console.
  2. Go to the AccessKey tab to create one or use an existing AccessKey. Be sure to keep your AccessKey secret.

Quick Start

Setup

  1. Install Xcode.

  2. Install CocoaPods.

  3. Import the Picovoice-iOS binding by adding the following line to the Podfile:

pod 'Picovoice-iOS'

  4. Run the following from the project directory:

pod install

  5. Add the following to the app's Info.plist file to enable recording with an iOS device's microphone:

<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>
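
iOS also prompts the user for microphone access at runtime. If you want to request (or verify) the permission yourself before starting audio capture, a minimal sketch using AVAudioSession might look like this (where and whether you add this check is up to your app; it is not part of the Picovoice SDK):

import AVFoundation

// Request microphone permission before starting audio capture.
// The completion handler may be called on a background queue.
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    if !granted {
        // Inform the user that voice commands require microphone access.
    }
}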

Usage

Include a Porcupine keyword file (.ppn) and a Rhino context file (.rhn) in the app as bundled resources (add them under Build Phases > Copy Bundle Resources). Then, get their paths from the app bundle:

let keywordPath = Bundle.main.path(forResource: "${KEYWORD_IOS}", ofType: "ppn")
let contextPath = Bundle.main.path(forResource: "${CONTEXT_IOS}", ofType: "rhn")
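
Bundle.main.path(forResource:ofType:) returns an optional, so it is worth confirming the files were actually copied into the bundle before using the paths. A minimal sketch (the error handling here is illustrative, not part of the SDK):

guard let keywordPath = Bundle.main.path(forResource: "${KEYWORD_IOS}", ofType: "ppn"),
      let contextPath = Bundle.main.path(forResource: "${CONTEXT_IOS}", ofType: "rhn") else {
    fatalError("Keyword or context file missing from the app bundle")
}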

Create an instance of PicovoiceManager that detects the wake word and infers intents from spoken commands:

import Picovoice
let manager = PicovoiceManager(
    accessKey: "${ACCESS_KEY}",
    keywordPath: keywordPath,
    onWakeWordDetection: {
        // logic to execute upon detection of wake word
    },
    contextPath: contextPath,
    onInference: { inference in
        if inference.isUnderstood {
            let intent: String = inference.intent
            let slots: Dictionary<String, String> = inference.slots
            // take action based on the inferred intent and slot values
        } else {
            // handle unsupported commands
        }
    }
)
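
For example, with a hypothetical smart-lighting context that defines changeColor and turnOff intents, the onInference callback might act on the result as follows (the intent names, slot names, and helper functions are illustrative, not part of the SDK):

import Picovoice

// Hypothetical app functions; replace with your own logic.
func setLightColor(_ color: String) { /* ... */ }
func turnOffLights() { /* ... */ }

let manager = PicovoiceManager(
    accessKey: "${ACCESS_KEY}",
    keywordPath: keywordPath,
    onWakeWordDetection: {
        // e.g. play a chime so the user knows the app is listening
    },
    contextPath: contextPath,
    onInference: { inference in
        guard inference.isUnderstood else {
            // prompt the user to rephrase the command
            return
        }
        switch inference.intent {
        case "changeColor":
            // slots maps slot names to spoken values, e.g. ["color": "blue"]
            setLightColor(inference.slots["color"] ?? "white")
        case "turnOff":
            turnOffLights()
        default:
            break
        }
    }
)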

Start audio capture:

do {
    try manager.start()
} catch {
    // handle error (e.g. report that audio capture failed to start)
}

Stop audio capture and processing with:

manager.stop()
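
In a real app you typically tie start() and stop() to the view or scene lifecycle. A minimal SwiftUI sketch, assuming a view named VoiceView and a manager created as shown above (the view, property names, and message strings are illustrative):

import SwiftUI
import Picovoice

struct VoiceView: View {
    // Assumes `manager` was created as shown above.
    let manager: PicovoiceManager
    @State private var errorMessage = ""

    var body: some View {
        Text(errorMessage.isEmpty ? "Say the wake word..." : errorMessage)
            .onAppear {
                do {
                    try manager.start()   // begin listening when the view appears
                } catch {
                    errorMessage = "Failed to start Picovoice: \(error)"
                }
            }
            .onDisappear {
                manager.stop()            // release the microphone when the view goes away
            }
    }
}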

Custom Wake Words & Contexts

Create custom models using the Picovoice Console. Download the custom Porcupine keyword (.ppn) and Rhino context (.rhn) files and include them in the app as bundled resources (add them under Build Phases > Copy Bundle Resources).

Non-English Languages

Use the corresponding model file (.pv) to infer non-English wake words and contexts. The model files for all supported languages are available in the Porcupine and Rhino GitHub repositories.

Pass in the model files using the porcupineModelPath and rhinoModelPath input arguments to change the language:

let porcupineModelPath = Bundle.main.path(forResource: "${PORCUPINE_MODEL_FILE}", ofType: "pv")
let rhinoModelPath = Bundle.main.path(forResource: "${RHINO_MODEL_FILE}", ofType: "pv")

let manager = PicovoiceManager(
    accessKey: "${ACCESS_KEY}",
    keywordPath: keywordPath,
    onWakeWordDetection: {
        // logic to execute upon detection of wake word
    },
    porcupineModelPath: porcupineModelPath,
    contextPath: contextPath,
    onInference: { inference in
        // logic to execute upon completion of intent inference
    },
    rhinoModelPath: rhinoModelPath
)

Demo

For the Picovoice iOS SDK, we offer demo applications that demonstrate how to use the end-to-end speech recognition platform on real-time audio streams (i.e. microphone input).

Setup

Clone the Repository

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git

Usage

  1. Install dependencies:

cd demo/ios/ForegroundApp
pod install

  2. Replace let ACCESS_KEY = "${YOUR_ACCESS_KEY_HERE}" in the file ContentView.swift with a valid AccessKey.

  3. Open the PicovoiceForegroundAppDemo.xcworkspace and run the demo.

For more information on our Picovoice demos for iOS, head over to our GitHub repository.

Resources

  • Packages
  • API
  • GitHub
  • Benchmark

Further Reading

