Porcupine - iOS API

  • Wake Word Engine
  • Offline Voice Commands
  • Local Speech Recognition
  • Always Listening
  • iOS
  • Swift
  • C

This document outlines how to integrate the Porcupine wake word engine into an iOS application.


Porcupine is implemented in ANSI C and is shipped as a precompiled library accompanied by a corresponding header file. To integrate it within an iOS application, the following items are needed:

  • Precompiled library
  • Header file
  • The model file. The standard model is freely available on Porcupine's GitHub repository. Enterprises who are commercially engaged with Picovoice can access compressed and standard models as well.
  • Keyword file(s) for your use case. A set of freely-available keyword files can be found on Porcupine's GitHub repository. Enterprises who are engaged with Picovoice can create custom wake word models using Picovoice Console.
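For example, if the model and keyword files are bundled as app resources, their paths can be resolved at runtime. A minimal sketch; the file names and extensions below are illustrative and should match the files you actually bundle:

let modelFilePath = Bundle.main.path(forResource: "porcupine_params", ofType: "pv")!
let keywordFilePaths = ["porcupine", "bumblebee"].map {
    Bundle.main.path(forResource: $0 + "_ios", ofType: "ppn")!
}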


There are two approaches for integrating Porcupine into an iOS application: binding directly to its C library, or using the high-level PorcupineManager class.


Porcupine is shipped as a precompiled ANSI C library and can be used directly in Swift via module maps. A module map along the following lines exposes the library to Swift; the module name and header path are illustrative and should match your project layout:
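module PvPorcupine {
    header "pv_porcupine.h"
    export *
}

With the library exposed to Swift, the engine can be initialized to detect multiple wake words concurrently using: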

let modelFilePath: String = ...
let keywordFilePaths: [String] = ["path/to/keyword/1", "path/to/keyword/2", ...]
let sensitivities: [Float] = [0.3, 0.7, ...] // within [0, 1]; higher means fewer misses, more false alarms
var handle: OpaquePointer?

let status = pv_porcupine_init(
    modelFilePath,
    Int32(keywordFilePaths.count),
    keywordFilePaths.map { UnsafePointer(strdup($0)) },
    sensitivities,
    &handle)
if status != PV_STATUS_SUCCESS {
    // error handling logic
}

The returned handle can then be used to monitor the incoming audio stream:

func getNextAudioFrame() -> UnsafeMutablePointer<Int16> {
    // grab the next frame of 16-bit PCM audio from the input stream
}

while true {
    let pcm = getNextAudioFrame()
    var keyword_index: Int32 = -1
    let status = pv_porcupine_process(handle, pcm, &keyword_index)
    if status != PV_STATUS_SUCCESS {
        // error handling logic
    }
    if keyword_index >= 0 {
        // detection event logic/callback
    }
}
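Each frame passed to pv_porcupine_process must be single-channel, 16-bit linearly-encoded PCM, with the frame length and sample rate dictated by the library. Both can be queried from the C API:

let frameLength = Int(pv_porcupine_frame_length()) // number of audio samples per frame
let sampleRate = Int(pv_sample_rate())             // required sample rate in Hz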

When finished, release the resources acquired by the engine via:

pv_porcupine_delete(handle)

Alternatively, the PorcupineManager class manages all activities related to creating an input audio stream, feeding it into the Porcupine library, and invoking a user-provided detection callback. The class can be initialized as below:

let modelPath: String = ...
let keywordPaths: [String] = ...
let sensitivities: [Float] = ...
let onDetection: ((Int32) -> Void) = { keywordIndex in
    // detection event callback
}

let manager = try PorcupineManager(
    modelPath: modelPath,
    keywordPaths: keywordPaths,
    sensitivities: sensitivities,
    onDetection: onDetection)

Once initialized, input audio can be monitored using manager.start(). When done, be sure to stop the manager using manager.stop().
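Putting the pieces together, a minimal lifecycle sketch, assuming (per the snippets above) that only the initializer throws and that the callback receives the index of the spotted keyword within keywordPaths:

do {
    let manager = try PorcupineManager(
        modelPath: modelPath,
        keywordPaths: keywordPaths,
        sensitivities: sensitivities,
        onDetection: { keywordIndex in
            // index of the detected keyword within keywordPaths
            print("detected keyword #\(keywordIndex)")
        })
    manager.start()
    // ... later, e.g. when the hosting view disappears:
    manager.stop()
} catch {
    // initialization failed (e.g. invalid model or keyword file path)
}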
