Rhino Speech-to-Intent
Angular Quick Start


Platforms

  • Chrome & Chromium-based browsers
  • Edge
  • Firefox
  • Safari

Requirements

  • Picovoice Account and AccessKey
  • Node.js 14+
  • Angular 13+
  • npm

Picovoice Account & AccessKey

Sign up for or log in to the Picovoice Console to get your AccessKey. Make sure to keep your AccessKey secret.
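
For example, the AccessKey can be kept in a single constant and imported wherever Rhino is initialized in the Usage section below. This is a minimal sketch; the file and constant name are hypothetical, and "${ACCESS_KEY}" is a placeholder for the key from Picovoice Console:

// Minimal sketch (hypothetical file src/app/picovoice.ts and constant name):
// keep the AccessKey in one place and import it where Rhino is initialized.
// Replace "${ACCESS_KEY}" with the key obtained from Picovoice Console.
export const PICOVOICE_ACCESS_KEY = "${ACCESS_KEY}";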

Quick Start

Setup

  1. Install Node.js.

  2. Install the npm packages:

    • @picovoice/rhino-angular
    • @picovoice/web-voice-processor
npm install @picovoice/rhino-angular @picovoice/web-voice-processor

Usage

To initialize Rhino, you'll need a Rhino context file (.rhn) as well as a model file (.pv). Place these files in the project's public directory, or generate a base64 representation of each file using the built-in script:

npx pvbase64 -i ${RHINO_INIT_FILE} -o ${BASE64_OUTPUT_FILE}

Pass the path to the file (relative to the public directory) or use the base64 string:

const rhinoFile = {
  publicPath: "${FILE_RELATIVE_PATH}",
  // or
  base64: "${FILE_BASE64_STRING}",
}
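
For example, if the context and model files were copied into the project's public assets directory, the references might look like the following sketch (the file locations are hypothetical placeholders):

// Minimal sketch with hypothetical file locations relative to the public directory.
const rhinoContextFile = {
  publicPath: "assets/rhino_context.rhn",   // Rhino context file (.rhn)
};
const rhinoModelFile = {
  publicPath: "assets/rhino_params.pv",     // Rhino model file (.pv)
};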

Add the RhinoService, which infers intent from spoken commands within a given context, to an Angular component:

import { Subscription } from "rxjs"
import { RhinoService } from "@picovoice/rhino-angular"

const contextFileBase64 = // base64 of Rhino context file (.rhn)
const modelFileBase64 = // base64 of Rhino model file (.pv)

export class VoiceWidget {
  private contextInfoSubscription: Subscription
  private inferenceSubscription: Subscription
  private isLoadedSubscription: Subscription
  private isListeningSubscription: Subscription
  private errorSubscription: Subscription

  constructor(private rhinoService: RhinoService) {
    // Subscribe to the RhinoService observables to receive state changes and results.
    this.contextInfoSubscription = rhinoService.contextInfo$.subscribe(
      contextInfo => {
        console.log(contextInfo);
      });
    this.inferenceSubscription = rhinoService.inference$.subscribe(
      inference => {
        console.log(inference);
      });
    this.isLoadedSubscription = rhinoService.isLoaded$.subscribe(
      isLoaded => {
        console.log(isLoaded);
      });
    this.isListeningSubscription = rhinoService.isListening$.subscribe(
      isListening => {
        console.log(isListening);
      });
    this.errorSubscription = rhinoService.error$.subscribe(
      error => {
        console.error(error);
      });
  }

  async ngOnInit() {
    // Load Rhino with your AccessKey, context file, and model file.
    await this.rhinoService.init(
      ${ACCESS_KEY},
      { base64: contextFileBase64 },
      { base64: modelFileBase64 },
    )
  }

  async process() {
    await this.rhinoService.process();
  }

  ngOnDestroy() {
    // Clean up subscriptions and release Rhino resources.
    this.contextInfoSubscription.unsubscribe();
    this.inferenceSubscription.unsubscribe();
    this.isLoadedSubscription.unsubscribe();
    this.isListeningSubscription.unsubscribe();
    this.errorSubscription.unsubscribe();
    this.rhinoService.release();
  }
}

To start inferring intent from spoken commands, call the process function:

await this.rhinoService.process();

The process function initializes WebVoiceProcessor. Rhino will then listen to and process frames of microphone audio until it reaches a conclusion, then update the inference subscription with the latest result. Once a conclusion is reached, Rhino enters a paused state. From the paused state, call process again to detect another inference.
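
Because Rhino pauses after each conclusion, one way to keep inferring is to call process again from the inference subscription. The sketch below assumes the VoiceWidget component shown above and is one possible pattern, not the only one:

// Minimal sketch: resume listening after every inference by calling process()
// again inside the inference$ subscription (assumes the VoiceWidget above).
this.inferenceSubscription = this.rhinoService.inference$.subscribe(
  async inference => {
    console.log(inference);
    await this.rhinoService.process();  // re-enter the listening state
  });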

Custom Contexts

Create custom contexts in the Picovoice Console using the Rhino Grammar. Train and download a Rhino context file (.rhn) for the Web (WASM) platform. This context file can be used directly with publicPath; if base64 is preferable, convert the .rhn file to a base64 JavaScript variable using the built-in pvbase64 script:

npx pvbase64 -i ${CONTEXT_FILE}.rhn -o ${CONTEXT_BASE64}.js -n ${CONTEXT_BASE64_VAR_NAME}

Similar to the model file (.pv), context files (.rhn) are saved in IndexedDB to be used by WebAssembly. Either base64 or publicPath must be set for the context in order to instantiate Rhino. If both are set, Rhino will use the base64 context.

const contextModel = {
  publicPath: "${CONTEXT_RELATIVE_PATH}",
  // or
  base64: "${CONTEXT_BASE64_STRING}",
}
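
The custom context is then passed to RhinoService along with a model file (.pv). A minimal sketch, with placeholder values for the AccessKey and model file location:

// Minimal sketch: initialize RhinoService with the custom context defined above.
// ${ACCESS_KEY} and the model file location are placeholders.
await this.rhinoService.init(
  ${ACCESS_KEY},
  contextModel,                          // custom context (.rhn) via publicPath or base64
  { publicPath: "rhino_params.pv" },     // Rhino model file (.pv)
);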

Switching Languages

To use Rhino in a different language, use the corresponding model file (.pv) for that language. The model files for all supported languages are available in the Rhino GitHub repository.
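
For example, to run a German context, initialize Rhino with the German model file and a context trained for German. The sketch below uses hypothetical file locations:

// Minimal sketch (hypothetical file locations): German model and context files.
await this.rhinoService.init(
  ${ACCESS_KEY},
  { publicPath: "assets/context_de.rhn" },      // context trained for German
  { publicPath: "assets/rhino_params_de.pv" },  // German model file from the Rhino repo
);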

Demo

For the Rhino Angular SDK, there is an Angular demo project available in the Rhino GitHub repository.

Setup

Clone the Rhino repository from GitHub:

git clone --recurse-submodules https://github.com/Picovoice/rhino.git

Usage

  1. Install dependencies and run:
cd rhino/demo/angular
npm install
npm run start
  2. Open http://localhost:4200 to view it in the browser.

Resources

Packages

  • @picovoice/rhino-angular on the npm registry
  • @picovoice/web-voice-processor on the npm registry

API

  • @picovoice/rhino-angular API Docs

GitHub

  • Rhino Angular SDK on GitHub
  • Rhino Angular Demo on GitHub

Benchmark

  • Speech-to-Intent Benchmark

Further Reading

  • Voice-enabling an Angular App with Wake Words
