# picoLLM Inference Engine: .NET Quick Start
## Platforms
- Linux (x86_64)
- macOS (x86_64, arm64)
- Windows (x86_64, arm64)
- Raspberry Pi (4, 5)
## Requirements
.NET Framework 4.6.1+ / .NET Standard 2.0+ / .NET Core 3.0+:
- Windows (x86_64)
.NET Standard 2.0+ / .NET Core 3.0+:
- macOS (x86_64)
.NET 6.0+:
- macOS (arm64)
- Windows (arm64)
- Linux (x86_64)
- Raspberry Pi (4, 5)
## Picovoice Account & AccessKey

Sign up for or log in to [Picovoice Console](https://console.picovoice.ai/) to get your `AccessKey`. Make sure to keep your `AccessKey` secret.
## Quick Start

### Setup
1. Install .NET.
2. Install the picoLLM Inference Engine NuGet package in Visual Studio or using the .NET CLI (see the command after this list).
3. Download a picoLLM model file (`.pllm`) from Picovoice Console.
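Assuming the package is published on NuGet under the `picoLLM` ID (consistent with the SDK's name), the CLI install for step 2 is:

```console
dotnet add package picoLLM
```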
### Usage
Create an instance of the inference engine:
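A minimal sketch, assuming the SDK follows Picovoice's usual .NET conventions (the `Pv` namespace and a static `PicoLLM.Create` factory):

```csharp
using Pv;

// Replace with your AccessKey from Picovoice Console and the path to the
// downloaded .pllm model file.
const string accessKey = "${ACCESS_KEY}";
const string modelPath = "${MODEL_PATH}";

PicoLLM picollm = PicoLLM.Create(accessKey, modelPath);
```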
Generate a prompt completion:
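For example (the result type's name and its `Completion` property are assumptions based on Picovoice's other SDKs; `var` sidesteps the exact type name):

```csharp
// Generate a completion for the given prompt. Generate() blocks until
// generation finishes.
var res = picollm.Generate("${PROMPT}");

// The result is assumed to expose the generated text via a Completion property.
Console.WriteLine(res.Completion);
```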
To interrupt completion generation before it has finished:
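Since `Generate` blocks the calling thread, the interrupt would typically be issued from another thread; a hedged sketch:

```csharp
// Call from a separate thread while Generate() is running to stop
// generation early.
picollm.Interrupt();
```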
`PicoLLM` will have its resources freed by the garbage collector, but to have resources freed immediately after use, wrap it in a `using` statement or call `.Dispose()` directly:
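For example, with a `using` statement:

```csharp
using (PicoLLM picollm = PicoLLM.Create(accessKey, modelPath))
{
    // ... use picollm ...
}   // Dispose() runs here, releasing resources immediately.
```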
## Demo
For the picoLLM .NET SDK, we offer demo applications that demonstrate how to use it to generate text from a prompt or in a chat-based environment.
### Setup

Clone the picoLLM Inference Engine repository from GitHub:
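The repository is at github.com/Picovoice/picollm; `--recursive` mirrors Picovoice's usual clone instructions and pulls in any submodules:

```console
git clone --recursive https://github.com/Picovoice/picollm.git
```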
Build the demo:
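Assuming the .NET completion demo follows the per-demo build configuration used across Picovoice's .NET demos, from the demo's directory in the cloned repository:

```console
dotnet build -c CompletionDemo.Release
```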
### Usage
Run the demo by entering the following in the terminal:
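Assuming the demo accepts the usual Picovoice CLI flags (`--access_key`, `--model_path`, `--prompt`) and the build configuration above, the invocation would look like:

```console
dotnet run -c CompletionDemo.Release -- \
    --access_key ${ACCESS_KEY} \
    --model_path ${MODEL_PATH} \
    --prompt ${PROMPT}
```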
Replace `${ACCESS_KEY}` with your AccessKey obtained from Picovoice Console, `${MODEL_PATH}` with the path to a model file downloaded from Picovoice Console, and `${PROMPT}` with a prompt string.
To get information about all the available options in the demo, run the following:
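Assuming the demo prints usage via a conventional help flag:

```console
dotnet run -c CompletionDemo.Release -- --help
```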
For more information on our picoLLM demos for .NET, or to see a chat-based demo, head over to our [GitHub repository](https://github.com/Picovoice/picollm).