Rhino Speech-to-Intent
iOS Quick Start

Platforms

  • iOS (13.0+)

Requirements

  • Xcode
  • Swift Package Manager or CocoaPods

Picovoice Account & AccessKey

Sign up for or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.

Quick Start

Setup

  1. Install Xcode.

  2. Import the Rhino-iOS package into your project.

To import the package using SPM, open up your project's Package Dependencies in Xcode and add:

https://github.com/Picovoice/rhino.git

To import it into your iOS project using CocoaPods, add the following line to your Podfile:

pod 'Rhino-iOS'

Then, run the following from the project directory:

pod install

  3. Add the following to the app's Info.plist file to enable recording with an iOS device's microphone:
<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>

Usage

Include the context file (either a pre-built context file (.rhn) from the Rhino Speech-to-Intent GitHub Repository or a custom context created with the Picovoice Console) in the app as a bundled resource (listed under Build Phases > Copy Bundle Resources). Then, get its path from the app bundle:

let contextPath = Bundle.main.path(forResource: "${CONTEXT_FILE}", ofType: "rhn")
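
Note that path(forResource:ofType:) returns an optional, so the lookup can be unwrapped before the path is handed to Rhino Speech-to-Intent. A minimal sketch that fails fast if the context file was not bundled:

guard let contextPath = Bundle.main.path(forResource: "${CONTEXT_FILE}", ofType: "rhn") else {
    // the context file was not copied into the app bundle
    fatalError("Rhino context file not found in the app bundle")
}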

Create an instance of RhinoManager that infers custom commands:

import Rhino

do {
    let rhinoManager = try RhinoManager(
        accessKey: "${ACCESS_KEY}",
        contextPath: contextPath,
        onInferenceCallback: inferenceCallback)
} catch {
    // handle Rhino Speech-to-Intent initialization errors
}

The onInferenceCallback parameter is a function that will be invoked when Rhino Speech-to-Intent has returned an inference result:

let inferenceCallback: ((Inference) -> Void) = { inference in
    if inference.isUnderstood {
        let intent: String = inference.intent
        let slots: Dictionary<String, String> = inference.slots
        // take action based on inferred intent and slot values
    } else {
        // handle unsupported commands
    }
}

Start audio capture:

do {
    try rhinoManager.process()
} catch {
    // handle audio capture errors
}

Once an inference has been made, the inferenceCallback will be invoked and audio capture will stop automatically.
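
If the app should keep listening after each result, capture can be restarted from inside the callback once the inference has been handled. A minimal sketch, assuming rhinoManager is stored as a property so the closure can reach it:

let inferenceCallback: ((Inference) -> Void) = { [weak self] inference in
    // ... handle the inference as shown above ...
    // then resume listening for the next command
    try? self?.rhinoManager?.process()
}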

Release resources explicitly when done with Rhino Speech-to-Intent:

rhinoManager.delete()

To use your own audio processing pipeline, check out the Low-Level Rhino API.
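
As a rough illustration of that pattern, the low-level Rhino class processes one frame of audio at a time and reports when an inference has been finalized. The sketch below assumes nextAudioFrame() as a placeholder for your own audio source, and the exact signatures should be confirmed against the Rhino-iOS API docs:

import Rhino

// sketch only: nextAudioFrame() stands in for your own audio pipeline,
// delivering Rhino.frameLength samples of 16-bit, 16 kHz PCM per call
let rhino = try Rhino(accessKey: "${ACCESS_KEY}", contextPath: contextPath)
var inferenceResult: Inference? = nil
while inferenceResult == nil {
    let frame: [Int16] = nextAudioFrame()
    if try rhino.process(pcm: frame) {
        inferenceResult = try rhino.getInference()
    }
}
rhino.delete()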

Custom Contexts

Create custom contexts with the Picovoice Console. Download the custom context file (.rhn) and include it in the app as a bundled resource (listed under Build Phases > Copy Bundle Resources).

Alternatively, if the context file is deployed to the device with a different method, the absolute path to the file on device can be used.
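
For example, if the context file is downloaded at runtime into the app's Documents directory, its absolute path could be assembled as follows (the file name here is hypothetical):

import Foundation

// build an absolute path to a context file stored in the app's Documents directory
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let contextPath = documentsURL.appendingPathComponent("my_context.rhn").path  // hypothetical file name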

Non-English Languages

Use the corresponding model file (.pv) to infer non-English commands. The model files for all supported languages are available on the Rhino Speech-to-Intent GitHub repository.

Pass in the model file using the modelPath input argument to change the inference language:

let modelPath = Bundle.main.path(forResource: "${MODEL_FILE}", ofType: "pv")

do {
    let rhinoManager = try RhinoManager(
        accessKey: "${ACCESS_KEY}",
        contextPath: contextPath,
        modelPath: modelPath,
        onInferenceCallback: inferenceCallback)
} catch {
    // handle Rhino Speech-to-Intent initialization errors
}

Alternatively, if the model file is deployed to the device with a different method, the absolute path to the file on device can be used.

Demo

For the Rhino Speech-to-Intent iOS SDK, we offer demo applications that demonstrate how to use the Speech-to-Intent engine on real-time audio streams (i.e. microphone input).

Setup

Clone the Repository:

git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Usage

  1. Install dependencies:
cd rhino/demo/ios/
pod install

  2. Open the RhinoDemo.xcworkspace.

  3. Replace "${YOUR_ACCESS_KEY_HERE}" in the file ContentView.swift with a valid AccessKey.

  4. Go to Product > Scheme and select the scheme for the language you would like to demo (e.g. esDemo -> Spanish Demo, deDemo -> German Demo).

  5. Run the demo with a simulator or connected iOS device.

For more information on our Rhino Speech-to-Intent demos for iOS, head over to our GitHub repository.

Resources

Package

  • Rhino-iOS on CocoaPods

API

  • Rhino-iOS API Docs

GitHub

  • Rhino Speech-to-Intent iOS SDK on GitHub
  • Rhino Speech-to-Intent iOS Demos on GitHub

Benchmark

  • Speech-to-Intent Benchmark

Further Reading

  • Siri Gets a Barista Job: Adding Offline Voice AI to a SwiftUI App
