picoLLM Inference Engine
iOS Quick Start
Platforms
- iOS (16.0+)
Requirements
Picovoice Account & AccessKey
Sign up or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.
Quick Start
Setup
- Install Xcode.
- Install CocoaPods.
- Import the picoLLM-iOS binding by adding the following line to your Podfile:
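A minimal entry (the pod name below matches the binding referenced above; check the Picovoice docs for the current name and version):

```ruby
pod 'picoLLM-iOS'
```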
- Run the following from the project directory:
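```console
pod install
```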
- Download a picoLLM model file (`.pllm`) from the Picovoice Console.
Model File Deployment
To deploy a model file (`.pllm`) as part of an iOS app, there are a few options:
Include in App Bundle:
- Add model file to your Application's bundle as a resource.
- Keep in mind that Apple enforces a maximum app size limit, so not all models will fit.
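At runtime, a bundled model's path can be resolved with standard Bundle APIs. A minimal sketch, assuming the resource is named `model.pllm` (a placeholder name for illustration):

```swift
import Foundation

// Resolve the path of a model bundled as an app resource.
guard let modelPath = Bundle.main.path(forResource: "model", ofType: "pllm") else {
    fatalError("model.pllm not found in app bundle")
}
```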
Host Externally:
- Host the model file on a server or cloud storage service.
- Download the file from within the app.
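A minimal download sketch using URLSession, assuming a placeholder hosting URL and saving into the app's Documents directory:

```swift
import Foundation

// Placeholder URL; point this at your own server or cloud storage.
let remoteURL = URL(string: "https://example.com/models/model.pllm")!
let destination = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("model.pllm")

let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
    guard let tempURL = tempURL, error == nil else { return }
    // Replace any previous copy, then move the download into place.
    try? FileManager.default.removeItem(at: destination)
    try? FileManager.default.moveItem(at: tempURL, to: destination)
}
task.resume()
```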
Copy to Device (for testing or manual installation):
- Use AirDrop or connect your device via USB and copy your model to the device's storage.
- Access the file programmatically within your app.
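Files copied over USB (with file sharing enabled in the app's Info.plist) or received via AirDrop typically land in the app's Documents directory. A sketch of locating one, again assuming the placeholder name `model.pllm`:

```swift
import Foundation

// Look up the copied model in the app's Documents directory.
let documentsURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
let modelPath = documentsURL.appendingPathComponent("model.pllm").path
```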
Usage
- Create an instance of the inference engine:
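A sketch of initialization, assuming the binding exposes a `PicoLLM` class constructed from an AccessKey and a model path (see the SDK reference for the exact signature):

```swift
import PicoLLM

let picollm = try PicoLLM(
    accessKey: "${YOUR_ACCESS_KEY_HERE}", // AccessKey from Picovoice Console
    modelPath: "${MODEL_FILE_PATH}")      // absolute path to the .pllm file
```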
- Generate a prompt completion:
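For example, assuming a `generate` method that takes a prompt and returns a result object carrying the completion text:

```swift
// Run inference on a prompt and print the generated text.
let result = try picollm.generate(prompt: "${PROMPT}")
print(result.completion)
```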
- To interrupt completion generation before it has finished:
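Assuming an `interrupt` method, call it from another thread (e.g. a UI event handler) while `generate` is still running:

```swift
// Stops an in-flight generate() call early.
picollm.interrupt()
```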
Demo
The picoLLM iOS SDK ships with demo applications that show how to generate text from a prompt or in a chat-based environment.
Setup
- Clone the picoLLM repository from GitHub using HTTPS:
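```console
git clone --recurse-submodules https://github.com/Picovoice/picollm.git
```

(The `--recurse-submodules` flag also pulls in any submodules the repository uses.)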
- Connect an iOS device via USB or launch an iOS device simulator.
Usage
- Install dependencies:
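For example, from the completion demo's directory (the path below assumes the demo lives under `demo/ios` in the cloned repo):

```console
cd picollm/demo/ios/Completion
pod install
```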
- Open the PicoLLMCompletionDemo.xcworkspace in Xcode.
- Replace ${YOUR_ACCESS_KEY_HERE} in ViewController.swift with a valid AccessKey.
- AirDrop or copy the picoLLM model file (`.pllm`) to your deployment device.
- Build and run the app.
For more information on our picoLLM demo for iOS or to see a chat-based demo, head over to our GitHub repository.