Rhino - Vue API


This document outlines how to integrate the Rhino Speech-to-Intent engine within an application using its Vue API.

Requirements

  • yarn (or npm)
  • Secure browser context (i.e. HTTPS or localhost)

Compatibility

  • Chrome, Edge
  • Firefox
  • Safari

The Picovoice SDKs for Web are powered by WebAssembly (WASM), the Web Audio API, and Web Workers. All audio processing is performed in-browser, providing intrinsic privacy and reliability.

All modern browsers are supported, including on mobile. Internet Explorer is not supported.

Using the Web Audio API requires a secure context (HTTPS connection), with the exception of localhost, for local development.
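The secure-context rule can also be checked at runtime. In a browser you can simply consult the built-in `window.isSecureContext` boolean; the helper below is an illustrative sketch (not part of the SDK) that approximates the rule for a given URL:

```javascript
// Rough approximation of the browser's secure-context rules for a given URL.
// In a real page, prefer the built-in `window.isSecureContext` boolean.
function isSecureOrigin(url) {
  const { protocol, hostname } = new URL(url);
  if (protocol === "https:") return true;
  // localhost and loopback addresses are treated as secure for local development
  return hostname === "localhost" || hostname === "127.0.0.1" || hostname === "[::1]";
}

console.log(isSecureOrigin("https://example.com"));   // true
console.log(isSecureOrigin("http://localhost:8080")); // true
console.log(isSecureOrigin("http://example.com"));    // false
```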

Installation

Use npm or yarn to install the package and its peer dependencies. Each spoken language (e.g. 'en', 'de') is a separate package. For this example we'll use English:

yarn add @picovoice/rhino-web-vue @picovoice/rhino-web-en-worker @picovoice/web-voice-processor

(or)

npm install @picovoice/rhino-web-vue @picovoice/rhino-web-en-worker @picovoice/web-voice-processor

Language-specific Rhino web worker packages

Rhino worker packages are published separately per spoken language, following the naming pattern @picovoice/rhino-web-xx-worker (where xx is the two-letter language code, e.g. en, de). All of these worker packages are compatible with the Vue SDK.

Usage

The Rhino SDK for Vue is based on the Rhino SDK for Web. The library provides a renderless Vue component: Rhino. The component takes care of microphone access and audio downsampling (via @picovoice/web-voice-processor) and emits inference events to which your parent component can listen.

The Rhino component provides a "push-to-talk" experience by default: use a button to start a listening session, during which the isTalking state is true. Rhino listens to and processes frames of microphone audio until it reaches a conclusion. If the utterance matches something in your Rhino context (e.g. "make me a coffee" in a coffee maker context), the details of the inference are returned.
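The push-to-talk lifecycle described above can be modeled independently of Vue. The following is a minimal sketch (the `VoiceWidgetState` class is illustrative, not part of the SDK): a button press starts a session, and the inference callback ends it.

```javascript
// Minimal model of the push-to-talk lifecycle: idle -> talking -> idle.
class VoiceWidgetState {
  constructor() {
    this.isTalking = false;
    this.inference = null;
  }
  pushToTalk() {
    if (this.isTalking) return false; // already processing an utterance
    this.isTalking = true;
    return true;
  }
  onInference(inference) {
    // Rhino reached a conclusion: store it and return to the idle state
    this.inference = inference;
    this.isTalking = false;
  }
}

const widget = new VoiceWidgetState();
widget.pushToTalk(); // widget.isTalking === true
widget.onInference({ isFinalized: true, isUnderstood: true, intent: "makeCoffee" });
// widget.isTalking === false; widget.inference.intent === "makeCoffee"
```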

Parameters

The Rhino component has two main parameters:

  1. The rhinoFactory (language-specific, imported as RhinoWorkerFactory from the @picovoice/rhino-web-xx-worker series of packages, where xx is the two-letter language code). This dependency is injected so that the Rhino Vue component can remain language-agnostic.
  2. The rhinoFactoryArgs (i.e. the specific context we want Rhino to understand)

Provide a Rhino context via rhinoFactoryArgs:

export type RhinoContext = {
  /** Base64 representation of a trained Rhino context (`.rhn` file) */
  base64: string
  /** Value in range [0,1] that trades off miss rate for false alarm */
  sensitivity?: number
}

The Rhino component emits four events. The main event of interest is rhn-inference, emitted when Rhino concludes an inference (whether it was understood or not). The rhn-inference event provides a RhinoInference object:

export type RhinoInference = {
  /** Rhino has concluded the inference (isUnderstood is now set) */
  isFinalized: boolean
  /** The intent was understood (it matched an expression in the context) */
  isUnderstood?: boolean
  /** The name of the intent */
  intent?: string
  /** Map of the slot variables and values extracted from the utterance */
  slots?: Record<string, string>
}
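A handler for this object typically branches on isUnderstood. Here is a sketch (the `describeInference` helper, intent name, and slot names are hypothetical):

```javascript
// Turn a RhinoInference object into a short status message.
function describeInference(inference) {
  if (!inference.isFinalized) return "Still listening...";
  if (!inference.isUnderstood) return "Sorry, I didn't understand that.";
  const slots = Object.entries(inference.slots ?? {})
    .map(([name, value]) => `${name}=${value}`)
    .join(", ");
  return `Intent: ${inference.intent}${slots ? ` (${slots})` : ""}`;
}

console.log(describeInference({ isFinalized: true, isUnderstood: false }));
// "Sorry, I didn't understand that."
console.log(describeInference({
  isFinalized: true,
  isUnderstood: true,
  intent: "makeCoffee",
  slots: { size: "large", coffeeDrink: "espresso" },
}));
// "Intent: makeCoffee (size=large, coffeeDrink=espresso)"
```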

Make sure you handle the possibility of errors with the rhn-error event. Users may not have a working microphone, and they can always decline (and revoke) permissions; your application code should anticipate these scenarios. You should also ensure that your UI waits for the rhn-ready event before instructing users to use VUI features (i.e. the "Push to Talk" button should be disabled until this event occurs).
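One way to surface rhn-error failures is to map the common getUserMedia DOMException names to user-facing messages. A sketch under that assumption (the `errorToMessage` helper and message wording are illustrative):

```javascript
// Map errors received via rhn-error to user-facing text.
// "NotAllowedError" and "NotFoundError" are standard getUserMedia failures.
function errorToMessage(error) {
  const name = error && error.name;
  switch (name) {
    case "NotAllowedError":
      return "Microphone permission was denied. Please allow access and retry.";
    case "NotFoundError":
      return "No microphone was found on this device.";
    default:
      return `Something went wrong: ${String(error)}`;
  }
}

console.log(errorToMessage({ name: "NotAllowedError" }));
// "Microphone permission was denied. Please allow access and retry."
```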

<template>
  <div class="voice-widget">
    <Rhino
      ref="rhino"
      v-bind:rhinoFactoryArgs="{
        context: {
          base64: '...', /* Base64 representation of a trained Rhino context (`.rhn` file), omitted for brevity */
        },
      }"
      v-bind:rhinoFactory="factory"
      v-on:rhn-error="rhnErrorFn"
      v-on:rhn-inference="rhnInferenceFn"
      v-on:rhn-loading="rhnLoadingFn"
      v-on:rhn-ready="rhnReadyFn"
    />
  </div>
</template>

<script>
import Rhino from "@picovoice/rhino-web-vue"
import { RhinoWorkerFactory as RhinoWorkerFactoryEn } from "@picovoice/rhino-web-en-worker"

export default {
  name: "VoiceWidget",
  components: {
    Rhino,
  },
  data: function () {
    return {
      inference: null,
      isError: false,
      errorMessage: "",
      isLoaded: false,
      isListening: false,
      isTalking: false,
      factory: RhinoWorkerFactoryEn,
    }
  },
  methods: {
    pushToTalk: function () {
      if (this.$refs.rhino.pushToTalk()) {
        this.isTalking = true
      }
    },
    rhnLoadingFn: function () {
      this.isError = false
    },
    rhnReadyFn: function () {
      this.isLoaded = true
      this.isListening = true
    },
    rhnInferenceFn: function (inference) {
      this.inference = inference
      console.log("Rhino inference:", inference)
      this.isTalking = false
    },
    rhnErrorFn: function (error) {
      this.isError = true
      this.errorMessage = error.toString()
    },
  },
}
</script>

Events

The Rhino component will emit the following events:

Event            Data                                                                    Description
"rhn-loading"    (none)                                                                  Rhino has begun loading
"rhn-ready"      (none)                                                                  Rhino has finished loading & the user has granted microphone permission: ready to process voice
"rhn-inference"  The inference object (see above for examples)                           Rhino has concluded the inference
"rhn-error"      The error that was caught (e.g. "NotAllowedError: Permission denied")   An error occurred while Rhino or the WebVoiceProcessor was loading
