Embedding voice AI into kitchen appliances shouldn’t mean handing data and control to external ecosystems. Cloud assistants can introduce external dependencies and tie products to rigid platforms that dictate UX and certification requirements. They require device cloud accounts, stream customer audio to external servers, and create latency and reliability risks.
Picovoice takes a different approach: AI-powered voice technology that runs entirely on-device. In this tutorial, we use Python on a Raspberry Pi to show how voice-powered appliance actions, such as turning on devices, adjusting oven temperatures, and setting timers, can be mapped directly into your existing tech stack. The result is a customizable embedded voice layer that is responsive, operates offline, and scales across product lines.
This demo can also run on similar low-cost computers, mobile phones, laptops, desktops, or even within web browsers. Check out Picovoice's voice AI SDKs to add voice AI to your product.
Why Choose On-Device Voice AI for Smart Kitchen Appliances?
For smart kitchen appliances, Picovoice enables:
- Customizable experiences: Define your own wake words, intents, and voice responses for a personalized experience rather than adapting to rigid platforms.
- Seamless integration: Embed voice AI directly into existing appliance stacks without relying on cloud APIs or re-architecting device firmware.
- Low latency: Handle speech processing locally, eliminating network latency and keeping performance consistent.
- Privacy by default: Keep all audio processing on-device to meet strict compliance standards like GDPR.
The Voice AI Stack for Smart Appliances
Wake Word Detection: The kitchen appliance continuously listens for a predefined wake phrase using a wake word engine. Porcupine Wake Word detects the custom phrase and activates the rest of the voice pipeline.
Speech-to-Intent Recognition: Once the wake word is detected, the system identifies what action the user wants the smart appliance to perform. Rhino Speech-to-Intent processes the user's speech and directly maps the user's speech to a predefined intent and its parameters (or "slots"). This enables real-time voice-powered appliance tasks and is 6x more accurate than Big Tech alternatives.
Text-to-Speech: Orca Text-to-Speech provides natural-sounding voice feedback, allowing the appliance to confirm actions with spoken responses such as "Turning oven on."
Step 1: Create a Custom Wake Word for the Kitchen Assistant
- To train the custom wake word model, create an account on the Picovoice Console.
- In the Porcupine section of the Console, train the custom wake phrase, such as "Hey Chef".
- Download the keyword file, choosing Raspberry Pi as the target platform.
You will now find a zipped folder in your downloads containing the .ppn file for the custom wake word.
For tips on choosing an effective wake word, refer to this guide: Choosing a wake word
Step 2: Create a Custom Speech-to-Intent model for the AI Kitchen Assistant
- In the Rhino section of the Picovoice Console, create an empty Rhino context for the AI voice assistant.
- Click the "Import YAML" button in the top-right corner of the Console. Paste the YAML provided below to add the intents and expressions for the voice assistant.
- Download the context file, choosing Raspberry Pi as the target platform.
You will now find a zipped folder in your downloads containing the .rhn file for the custom context.
You can refer to the Rhino Syntax Cheat Sheet for more details on how to build your custom context.
YAML Context for the Kitchen Assistant:
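The exact context from the original post is not reproduced here; the sketch below is a minimal illustrative version you can paste into the Console's "Import YAML" dialog and adapt. The expression syntax (square brackets for alternatives, parentheses for optional words, `$slot:variable` references) follows the Rhino Syntax Cheat Sheet; `pv.TwoDigitInteger` is assumed to be available as a built-in slot, and all slot values and phrasings are examples only.

```yaml
context:
  expressions:
    TimerControl:
      - "[set, start] a timer for $pv.TwoDigitInteger:minutes minutes"
      - "[cancel, stop] the timer"
    TemperatureControl:
      - "[set, preheat] the oven to $temperature:temperature degrees"
    ApplianceControl:
      - "turn $state:state the $appliance:appliance"
      - "turn the $appliance:appliance $state:state"
    AskState:
      - "is the $appliance:appliance [on, off]"
      - "what is the oven (temperature) set to"
  slots:
    appliance:
      - "oven"
      - "microwave"
      - "dishwasher"
      - "coffee maker"
    state:
      - "on"
      - "off"
    temperature:
      - "one eighty"
      - "two hundred"
      - "three fifty"
      - "four hundred"
```

The slot names used here (minutes, temperature, appliance, and state) are the keys that come back on the inference object in Step 3.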
This context defines four main intents:
- TimerControl: Set and manage kitchen timers
- TemperatureControl: Adjust oven temperature
- ApplianceControl: Turn kitchen devices on/off
- AskState: Ask current appliance status
With these intents, the AI voice assistant can take on appliance-focused tasks. This context lays the groundwork for seamless integration with IoT platforms or embedded systems, enabling smart workflows where multiple devices coordinate together in real time.
To see how Rhino Speech-to-Intent can be applied beyond appliances, take a look at the AI Cooking Companion in Python.
Step 3: Build the Smart Kitchen Voice Control Pipeline in Python
With the wake word and Rhino context ready, let's write the Python code that brings everything together.
Initialize the Voice AI Engines and Audio I/O
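There is no single canonical way to wire this up; the sketch below is one minimal arrangement. The AccessKey environment variable name and the model file names are placeholders chosen for this tutorial, and the PvRecorder/PvSpeaker parameters may differ slightly across SDK versions, so check the API docs for your release.

```python
import os

import pvorca
import pvporcupine
import pvrhino
from pvrecorder import PvRecorder
from pvspeaker import PvSpeaker

# Placeholders: read the AccessKey from an environment variable (the variable name is
# our choice) and point the paths at the models downloaded from the Picovoice Console.
ACCESS_KEY = os.environ.get("PICOVOICE_ACCESS_KEY", "<YOUR_ACCESS_KEY>")
KEYWORD_PATH = "hey_chef_raspberry-pi.ppn"   # Porcupine wake word model
CONTEXT_PATH = "kitchen_raspberry-pi.rhn"    # Rhino context model

porcupine = pvporcupine.create(access_key=ACCESS_KEY, keyword_paths=[KEYWORD_PATH])
rhino = pvrhino.create(access_key=ACCESS_KEY, context_path=CONTEXT_PATH)
orca = pvorca.create(access_key=ACCESS_KEY)

# Porcupine and Rhino both consume 16 kHz, 512-sample frames, so one recorder feeds both.
# device_index=-1 selects the default input/output devices.
recorder = PvRecorder(frame_length=porcupine.frame_length, device_index=-1)
speaker = PvSpeaker(sample_rate=orca.sample_rate, bits_per_sample=16, device_index=-1)
```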
Detect the Custom Wake Word
Porcupine Wake Word continuously processes the microphone input to detect the predefined wake phrase. When the phrase is recognized, the system triggers the inference pipeline and generates an acknowledgment through Orca Text-to-Speech.
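A minimal detection loop might look like the following; speak() is the Orca playback helper defined later in the tutorial.

```python
def wait_for_wake_word():
    """Block until Porcupine spots the wake phrase in the microphone stream."""
    while True:
        pcm = recorder.read()             # one frame of 16-bit PCM samples
        if porcupine.process(pcm) >= 0:   # returns the keyword index, or -1 for no detection
            speak("Yes?")                 # short acknowledgment before listening for the command
            return
```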
Run On-Device Intent Recognition
Once activated, Rhino Speech-to-Intent processes the incoming audio in real time. When the user finishes speaking, it returns a structured inference containing the intent and any relevant slots.
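A sketch of that step, reusing the same recorder:

```python
def listen_for_intent():
    """Feed frames to Rhino until it finalizes, then return the structured inference."""
    while True:
        pcm = recorder.read()
        if rhino.process(pcm):            # True once Rhino has heard the end of the command
            return rhino.get_inference()  # carries .is_understood, .intent, and .slots
```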
Execute Appliance Actions from User Intents
The execute_appliance_action function converts Rhino’s inference output into actions for the target device and generates a text response for playback.
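A simplified version of the handler is sketched below, assuming the intent and slot names from the YAML sketch above; the actual device calls (GPIO, appliance firmware APIs, MQTT, and so on) are left as comments.

```python
def execute_appliance_action(inference):
    """Turn a Rhino inference into an appliance action and a spoken response."""
    if not inference.is_understood:
        return "Sorry, I didn't catch that."

    intent, slots = inference.intent, inference.slots
    if intent == "ApplianceControl":
        # Here you would toggle a relay, a GPIO pin, or call the appliance's control API.
        return f"Turning the {slots.get('appliance', 'appliance')} {slots.get('state', 'on')}."
    if intent == "TemperatureControl":
        # Forward the target temperature to the oven controller.
        return f"Setting the oven to {slots.get('temperature', 'the requested')} degrees."
    if intent == "TimerControl":
        # Start or cancel a timer in your scheduling logic.
        return f"Setting a timer for {slots.get('minutes', 'a few')} minutes."
    if intent == "AskState":
        # Query the appliance's real status instead of returning this placeholder.
        return "The oven is currently off."
    return "Sorry, I can't do that yet."
```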
Generate Voice Feedback
Orca Text-to-Speech synthesizes the response text into natural-sounding audio, completing the embedded conversational loop.
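A minimal playback helper is shown below; note that recent Orca releases return a (pcm, word alignments) tuple from synthesize(), and the PvSpeaker calls may differ slightly across SDK versions.

```python
def speak(text):
    """Synthesize the response with Orca and play it through the default speaker."""
    pcm, _alignments = orca.synthesize(text)  # recent Orca versions return (pcm, alignments)
    speaker.start()
    speaker.write(pcm)                        # queue the samples for playback
    speaker.flush()                           # wait until everything has been played
    speaker.stop()
```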
Complete Python Script
Here's the complete Python script integrating all components into an embedded voice AI stack for smart kitchen appliances:
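The full listing from the original post is not reproduced here; as an equivalent sketch, concatenate the snippets above into one file (say, kitchen_assistant.py, a name chosen for this tutorial) and drive them with a main loop like this:

```python
def main():
    recorder.start()
    print("Listening for 'Hey Chef' ...")
    try:
        while True:
            wait_for_wake_word()                 # Porcupine: block until the wake phrase
            inference = listen_for_intent()      # Rhino: follow-up command -> intent + slots
            response = execute_appliance_action(inference)
            speak(response)                      # Orca: spoken confirmation
    except KeyboardInterrupt:
        print("Shutting down.")
    finally:
        recorder.stop()
        for resource in (recorder, speaker, porcupine, rhino, orca):
            resource.delete()                    # release native resources


if __name__ == "__main__":
    main()
```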
Step 4: Run the Voice Stack on Raspberry Pi
With the script and model files ready, it’s time to deploy the embedded voice AI pipeline on a Raspberry Pi. For this demo, we used a Raspberry Pi 4 with a USB microphone and Bluetooth speaker for input and output. The same setup can run on laptops, in web browsers, or on other single-board computers.
Transfer your project to the Raspberry Pi
Transfer your script and model files to the Raspberry Pi with scp. Replace <your-pi-ip-address> with your Pi’s address and add the path to your project folder:
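For example, assuming the default pi user and a project folder named kitchen-assistant:

```bash
scp -r ./kitchen-assistant pi@<your-pi-ip-address>:/home/pi/
```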
Connect and prepare the Raspberry Pi
Make sure your Raspberry Pi is connected to a microphone and speaker so it can capture voice and play spoken responses. Then log in to the Pi and move into the project folder so you can launch the application:
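For example, from your development machine (again assuming the default pi user and folder name):

```bash
ssh pi@<your-pi-ip-address>
cd /home/pi/kitchen-assistant
```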
Install Dependencies on Raspberry Pi
Install all required Python SDKs and supporting libraries:
- Porcupine Wake Word Python SDK: pvporcupine
- Rhino Speech-to-Intent Python SDK: pvrhino
- Orca Text-to-Speech Python SDK: pvorca
- Picovoice Python Recorder library: pvrecorder
- Picovoice Python Speaker library: pvspeaker
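All five are published on PyPI, so a single pip command covers them:

```bash
pip3 install pvporcupine pvrhino pvorca pvrecorder pvspeaker
```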
Run the Smart Appliance Stack
You will need the Picovoice AccessKey to use the SDKs. Copy it from the Picovoice Console.
Run the following commands in the Raspberry Pi’s terminal. Replace the placeholder values with your actual AccessKey and the paths to your model files. The audio device indices are optional and fall back to the system defaults if not specified.
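With the sketch above, which reads the AccessKey from an environment variable (the variable name is our choice) and defaults both audio devices via device_index=-1, that boils down to:

```bash
export PICOVOICE_ACCESS_KEY="<YOUR_ACCESS_KEY>"
python3 kitchen_assistant.py
```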
The smart kitchen appliance is now ready and listening for a wake word.
Extending the Smart Kitchen Voice Assistant
Once the on-device voice assistant is running, here are some ways you can expand it:
- Integrate with Smart Kitchen IoT Devices: Move beyond the Raspberry Pi by connecting the AI voice assistant to real-world smart kitchen appliances.
- Multiple Languages: Picovoice engines support a wide range of languages, allowing you to adapt the voice assistant for different regions.
- Advanced Conversational Flows: Implement more complex dialogue management for multi-turn interactions.