# picoLLM Inference Engine Web Quick Start
## Platforms
- Chrome & Chromium-based browsers
- Edge
- Firefox
- Safari
## Requirements
- Picovoice Account and AccessKey
- Node.js 16+
- npm
## Picovoice Account & AccessKey

Sign up for or log in to [Picovoice Console](https://console.picovoice.ai/) to get your AccessKey. Make sure to keep your AccessKey secret.
## Quick Start

### Setup
Install Node.js.
Install the picoLLM Web package:
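The install command would typically be the following, assuming the SDK is published under Picovoice's usual npm scope as `@picovoice/picollm-web`:

```shell
npm install @picovoice/picollm-web
```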
- Download a picoLLM model file (`.pllm`) from [Picovoice Console](https://console.picovoice.ai/).
### Usage
- Either create an HTML input tag that accepts the `.pllm` model file:
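A minimal sketch of such an input element; the element `id` and the change handler are illustrative, not part of the SDK:

```html
<!-- File picker restricted to .pllm model files -->
<input type="file" id="modelFile" accept=".pllm" />
<script>
  document.getElementById("modelFile").addEventListener("change", (event) => {
    const file = event.target.files[0];
    // pass `file` to PicoLLMWorker.create in the next step
  });
</script>
```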
or put the model file on a web server or in a public directory:
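In that case the model can be fetched at runtime and wrapped as a `File`; this is a sketch assuming the SDK accepts a `File`/`Blob` as the model source, and the path and filename below are illustrative:

```javascript
// Fetch the .pllm model from a public path and wrap it as a File
// (the "/models/model.pllm" path is an example, not a required location).
const response = await fetch("/models/model.pllm");
const blob = await response.blob();
const file = new File([blob], "model.pllm");
```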
- Create a picoLLM instance using `PicoLLMWorker` and the model file from above:
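A sketch of instance creation, assuming the package exports a `PicoLLMWorker` class with a `create(accessKey, model)` factory, as in Picovoice's other Web SDKs; `file` stands for the `.pllm` `File` obtained in the previous step:

```javascript
import { PicoLLMWorker } from "@picovoice/picollm-web";

// `file` is the .pllm model File from the input tag or fetch above.
const picoLLM = await PicoLLMWorker.create(
  "${ACCESS_KEY}",
  { modelFile: file }
);
```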
Replace `${ACCESS_KEY}` with your AccessKey obtained from Picovoice Console.
- Generate prompt completion:
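A sketch of prompt completion, assuming a `generate(prompt)` method whose result exposes the completion text (the prompt string and the `res.completion` field follow the pattern used across picoLLM SDKs):

```javascript
// Generate a completion for a text prompt.
const res = await picoLLM.generate("Tell me a joke.");
console.log(res.completion);
```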
- To interrupt completion generation before it has finished:
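Assuming the Web SDK mirrors the native picoLLM SDKs, an in-flight `generate()` call can be stopped with `interrupt()`:

```javascript
// Stop an in-flight generate() call; the pending generate()
// promise then resolves with whatever was produced so far.
picoLLM.interrupt();
```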
- Release resources explicitly when done with picoLLM:
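Cleanup would look like the following, assuming a `release()` method as in Picovoice's other Web SDKs:

```javascript
// Free the worker and model resources held by the instance.
await picoLLM.release();
```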
## Demo
The picoLLM Web SDK ships with demo applications that show how to generate text from a prompt and how to build a chat-based experience.
### Setup
Clone the picoLLM repository from GitHub:
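The clone step:

```shell
git clone https://github.com/Picovoice/picollm.git
```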
### Usage
- Install dependencies and run:
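For the Web completion demo this would look roughly like the following; the `demo/web` path and the `start` script name are assumptions based on the layout Picovoice uses in its repositories:

```shell
cd picollm/demo/web
npm install
npm run start
```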
- Open [http://localhost:5000](http://localhost:5000) to view it in the browser.
For more information on our picoLLM demos for Web, or to see a chat-based demo, head over to our [GitHub repository](https://github.com/Picovoice/picollm).