
picoLLM Inference Engine
C Quick Start

Platforms

  • Linux (x86_64)
  • macOS (x86_64, arm64)
  • Windows (x86_64, arm64)
  • Raspberry Pi (4, 5)

Requirements

  • C99-compatible compiler
  • CMake (3.13+)
  • On Windows, MinGW is required to build the demo

Picovoice Account & AccessKey

Sign up for or log in to Picovoice Console to get your AccessKey. Keep your AccessKey secret.
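Hardcoding the AccessKey in source makes it easy to leak; one way to keep it out of your code is to read it from an environment variable at startup. The sketch below is illustrative and not part of the SDK — the variable name `PICOVOICE_ACCESS_KEY` and the helper `get_access_key` are this sketch's own conventions.

```c
#include <stdio.h>
#include <stdlib.h>

// Returns the AccessKey from the environment, or NULL if it is unset.
// PICOVOICE_ACCESS_KEY is just this sketch's convention; any name works.
static const char *get_access_key(void) {
    const char *access_key = getenv("PICOVOICE_ACCESS_KEY");
    if (access_key == NULL || access_key[0] == '\0') {
        fprintf(stderr, "PICOVOICE_ACCESS_KEY is not set\n");
        return NULL;
    }
    return access_key;
}
```

The returned pointer can then be passed as the first argument to `pv_picollm_init` in place of a hardcoded literal.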

Quick Start

Setup

  1. Clone the repository:
git clone https://github.com/Picovoice/picollm.git
  2. Download a picoLLM model file (.pllm) from Picovoice Console.

Usage

  1. Include the public header files (picovoice.h and pv_picollm.h).
  2. Link the project to an appropriate precompiled library for the target platform and load it.
  3. Create an instance of the engine:
pv_picollm_t *pllm = NULL;
pv_status_t status = pv_picollm_init(
        "${ACCESS_KEY}", // AccessKey obtained from Picovoice Console
        "${MODEL_PATH}", // absolute path to the `.pllm` model file
        "best",          // inference device ("best" selects automatically)
        &pllm);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  4. Generate a prompt completion:
pv_picollm_usage_t usage;
pv_picollm_endpoint_t endpoint;
int32_t num_completion_tokens;
pv_picollm_completion_token_t *completion_tokens;
char *output;
status = pv_picollm_generate(
        pllm,
        "${PROMPT}",
        -1,   // completion_token_limit
        NULL, // stop_phrases
        0,    // num_stop_phrases
        -1,   // seed
        0.f,  // presence_penalty
        0.f,  // frequency_penalty
        0.f,  // temperature
        1.f,  // top_p
        0,    // num_top_choices
        NULL, // stream_callback
        NULL, // stream_callback_context
        &usage,
        &endpoint,
        &completion_tokens,
        &num_completion_tokens,
        &output);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}
  5. To interrupt completion generation before it has finished:
pv_picollm_interrupt(pllm);
  6. Release resources explicitly when done with picoLLM:
// after each call to pv_picollm_generate
pv_picollm_delete_completion_tokens(completion_tokens, num_completion_tokens);
pv_picollm_delete_completion(output);
// once done with picoLLM
pv_picollm_delete(pllm);
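The `pv_picollm_generate` call above passes NULL for `stream_callback`, so the completion is only available once generation finishes. To consume tokens as they are produced, supply a callback and a context pointer instead. The sketch below is a self-contained illustration: `stream_buffer_t` and `on_token` are this sketch's own names, and the parameter order assumed here (context pointer first, then the newly generated text) is an assumption to verify against the callback type declared in `pv_picollm.h`.

```c
#include <stdio.h>
#include <string.h>

// Caller-owned accumulator for streamed text. This type is part of the
// sketch, not the picoLLM API.
typedef struct {
    char text[4096];
    size_t length;
} stream_buffer_t;

// Prints each newly generated piece of text as it arrives and appends
// it to the buffer passed via the context pointer. Assumed parameter
// order: (context, new text) — check pv_picollm.h for the real signature.
static void on_token(void *context, const char *token) {
    stream_buffer_t *buffer = (stream_buffer_t *) context;
    size_t token_length = strlen(token);
    if (buffer->length + token_length < sizeof(buffer->text)) {
        memcpy(buffer->text + buffer->length, token, token_length + 1);
        buffer->length += token_length;
    }
    fputs(token, stdout);
    fflush(stdout);
}
```

With a layout like this, you would pass `on_token` as the `stream_callback` argument and `&buffer` as `stream_callback_context` in `pv_picollm_generate`.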

Demo

The picoLLM C SDK ships with a demo application that shows how to generate text from a prompt.

Setup

  1. Clone the picoLLM repository from GitHub using HTTPS:
git clone https://github.com/Picovoice/picollm.git
  2. Build the demo:
cd picollm
cmake -S demo/c/. -B demo/c/build
cmake --build demo/c/build --target picollm_demo_completion

Usage

To see the usage options for the demo:

./demo/c/build/picollm_demo_completion

From the root of the repository, run the demo with the arguments for your platform:

./demo/c/build/picollm_demo_completion \
    -a ${ACCESS_KEY} \
    -m ${MODEL_PATH} \
    -l ${LIBRARY_PATH} \
    -p ${PROMPT}

For more information on our picoLLM demo for C, head over to our GitHub repository.

Resources

API

  • C API Docs

GitHub

  • picoLLM C demo on GitHub
