Picovoice Platform — .NET Quick Start



Requirements

  • .NET Core 3.1
  • OpenAL


Supported Platforms

  • Linux (x86_64)
  • macOS (x86_64)
  • Windows (x86_64)
  • Raspberry Pi

Cloning the Repository

If using SSH, clone the repository with:

git clone --recurse-submodules [email protected]:Picovoice/picovoice.git

If using HTTPS, then type:

git clone --recurse-submodules https://github.com/Picovoice/picovoice.git


Usage

Both demos use Microsoft's .NET Core framework.

The Microphone Demo uses OpenAL. You must install this before running the demo.

On Windows, install using the OpenAL Windows Installer.

On Linux, use apt-get:

sudo apt-get install libopenal-dev

On macOS, use brew:

brew install openal-soft

Once .NET Core and OpenAL have been installed, you can build with the dotnet CLI:

dotnet build -c MicDemo.Release
dotnet build -c FileDemo.Release


NOTE: the working directory for all dotnet commands is:


File Demo

The file demo uses Picovoice to scan for keywords and commands in an audio file. The demo is mainly useful for quantitative performance benchmarking against a corpus of audio data.

Picovoice processes a 16 kHz, single-channel audio stream. If a stereo file is provided, only the first (left) channel is processed.
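
Before benchmarking, it can help to sanity-check a file's format. The sketch below is illustrative only (Python's standard `wave` module, not the C# demo or the Picovoice API): it reports the sample rate and extracts the left channel of a 16-bit PCM stereo WAV, mirroring how the demo treats stereo input.

```python
import struct
import wave

def left_channel(path):
    """Return (sample_rate, left_samples) for a 16-bit PCM WAV file.

    For stereo input, only the first (left) channel is kept, mirroring
    how the demo handles stereo files.
    """
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2, "expected 16-bit PCM"
        rate = wf.getframerate()
        channels = wf.getnchannels()
        raw = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    return rate, list(samples[::channels])

# Build a tiny stereo file to demonstrate: left channel is 1, 2, 3; right is 0.
with wave.open("demo_stereo.wav", "wb") as wf:
    wf.setnchannels(2)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(b"".join(struct.pack("<hh", i, 0) for i in (1, 2, 3)))

rate, left = left_channel("demo_stereo.wav")
print(rate, left)  # 16000 [1, 2, 3]
```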

The following processes a file looking for instances of the wake phrase defined in the file located at ${PATH_TO_PORCUPINE_KEYWORD_FILE} and infers spoken commands using the context defined by the file located at ${PATH_TO_RHINO_CONTEXT_FILE}:

dotnet run -c FileDemo.Release -- \
--input_audio_path ${PATH_TO_INPUT_AUDIO_FILE} \
--keyword_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_path ${PATH_TO_RHINO_CONTEXT_FILE}

Microphone Demo

This demo opens an audio stream from a microphone and detects utterances of a given wake word and commands within a given context. The following processes incoming audio from the microphone for instances of the wake phrase defined in the file located at ${PATH_TO_PORCUPINE_KEYWORD_FILE} and then infers the follow-on spoken command using the context defined by the file located at ${PATH_TO_RHINO_CONTEXT_FILE}:

dotnet run -c MicDemo.Release -- \
--keyword_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_path ${PATH_TO_RHINO_CONTEXT_FILE}
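
The detect-then-infer flow the demo follows can be sketched in a few lines. The sketch below (Python for brevity; `is_wake_word` and `infer_intent` are stand-in stubs, not Picovoice APIs) shows the two-stage loop: audio frames are scanned for the wake word, and only frames after a detection are routed to intent inference.

```python
def run_pipeline(frames, is_wake_word, infer_intent):
    """Two-stage loop: scan frames for the wake word, then hand
    follow-on frames to intent inference until it yields a result."""
    results = []
    awake = False
    command_frames = []
    for frame in frames:
        if not awake:
            if is_wake_word(frame):               # stage 1: wake-word detection
                awake = True
        else:
            command_frames.append(frame)
            intent = infer_intent(command_frames)  # stage 2: intent inference
            if intent is not None:
                results.append(intent)
                awake = False                      # back to listening for the wake word
                command_frames = []
    return results

# Toy detectors: the wake "frame" is the string "porcupine"; a command is
# complete once the frame "lights" has been heard.
frames = ["noise", "porcupine", "turn", "on", "lights", "noise"]
out = run_pipeline(
    frames,
    is_wake_word=lambda f: f == "porcupine",
    infer_intent=lambda fs: " ".join(fs) if fs[-1] == "lights" else None,
)
print(out)  # ['turn on lights']
```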

It is possible that the default audio input device recognized by the demo is not the one being used. There are a couple of debugging facilities baked into the demo application to solve this. First, run the following in the console:

dotnet run -c MicDemo.Release -- --show_audio_devices

This prints information about the audio input devices available on the machine. On a Windows PC, the output looks something like this:

Available input devices:
Device 0: Microphone Array (Realtek(R) Au
Device 1: Microphone Headset USB

You can use the device index to specify which microphone to use for the demo. For instance, if you want to use the Headset microphone in the above example, you can invoke the demo application as below:

dotnet run -c MicDemo.Release -- \
--keyword_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_path ${PATH_TO_RHINO_CONTEXT_FILE} \
--audio_device_index 1

If the problem persists, we suggest storing the recorded audio in a file for inspection. This can be achieved with:

dotnet run -c MicDemo.Release -- \
--keyword_path ${PATH_TO_PORCUPINE_KEYWORD_FILE} \
--context_path ${PATH_TO_RHINO_CONTEXT_FILE} \
--audio_device_index 1 \
--output_path ./test.wav

If, after listening to the stored file, there is no apparent problem with the recorded audio, please open an issue.
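
Besides listening to the recording, it can help to confirm the file's basic properties match what Picovoice expects (16 kHz, mono; 16-bit PCM is assumed here). A minimal sketch using Python's standard `wave` module (illustrative; for the real demo you would pass the path of your own `test.wav`):

```python
import struct
import wave

def describe_wav(path):
    """Return (sample_rate, channels, sample_width_bytes, duration_seconds)."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        return (rate, wf.getnchannels(), wf.getsampwidth(),
                wf.getnframes() / float(rate))

# Demonstrate on a generated one-second silent mono recording; for the real
# demo you would call describe_wav("./test.wav") instead.
with wave.open("inspect_me.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(struct.pack("<h", 0) * 16000)

print(describe_wav("inspect_me.wav"))  # (16000, 1, 2, 1.0)
```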

Create Custom Wake Words & Contexts

You can create custom Porcupine wake word and Rhino context models using the Picovoice Console.
