picoLLM Inference Engine
Node.js Quick Start
Platforms
- Linux (x86_64)
- macOS (x86_64, arm64)
- Windows (x86_64)
- Raspberry Pi (4, 5)
Requirements
- Picovoice Account & AccessKey
- Node.js 16+
- npm
Picovoice Account & AccessKey
Sign up for or log in to Picovoice Console to get your AccessKey.
Make sure to keep your AccessKey secret.
Quick Start
Setup
Install Node.js.
Install the picollm-node npm package:
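A sketch of the install step, assuming the SDK is published on npm as @picovoice/picollm-node:

```shell
npm install @picovoice/picollm-node
```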
- Download a picoLLM model file (.pllm) from Picovoice Console.
Usage
- Create an instance of the engine:
- Generate a prompt completion:
- To interrupt completion generation before it has finished:
- When done, be sure to release the resources explicitly:
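The steps above can be sketched together as follows. This is a minimal sketch based on the picollm-node API as documented by Picovoice; ${ACCESS_KEY} and ${MODEL_PATH} are placeholders for your Picovoice Console AccessKey and the path to the downloaded .pllm file, and the exact class and method names should be checked against the SDK's documentation:

```javascript
const { PicoLLM } = require("@picovoice/picollm-node");

// Create an instance of the engine
// (replace ${ACCESS_KEY} and ${MODEL_PATH} with your own values).
const pllm = new PicoLLM("${ACCESS_KEY}", "${MODEL_PATH}");

async function main() {
  // Generate a prompt completion. `generate` returns a Promise,
  // so it must be awaited from an async context.
  const res = await pllm.generate("Tell me a joke.");
  console.log(res.completion);

  // When done, be sure to release the resources explicitly.
  pllm.release();
}

main();

// To interrupt completion generation before it has finished,
// call interrupt() from another context while generate() is pending:
// pllm.interrupt();
```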
Demo
For the picoLLM Node.js SDK, we offer a demo application that shows how to generate text from a prompt or in a chat-based environment.
Setup
Install the picoLLM demo package:
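A sketch of the demo install, assuming the demo package is published as @picovoice/picollm-node-demo and installed globally so its command-line utilities land on your PATH:

```shell
npm install -g @picovoice/picollm-node-demo
```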
This package installs command-line utilities for the picoLLM Node.js demos.
Usage
Use the --help flag to see the usage options for the completion demo:
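Assuming the demo package exposes a picollm-completion-demo command (the name is an assumption; check the package's documentation):

```shell
picollm-completion-demo --help
```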
Run the following command to generate text:
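A sketch of a completion run; the flag names here are illustrative, so confirm them against the --help output, and replace the ${...} placeholders with your own values:

```shell
picollm-completion-demo \
  --access_key ${ACCESS_KEY} \
  --model_file_path ${MODEL_PATH} \
  --prompt "${PROMPT}"
```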
For more information on our picoLLM demos for Node.js or to see a chat-based demo, head over to our GitHub repository.