Real-time Transcription Benchmark
Real-time transcription is one of the most widely known and used speech AI technologies. It enables applications that require immediate visual feedback, such as dictation (voice typing), voice assistants, and closed captioning for virtual events and meetings. Finding the best real-time transcription engine can be challenging: a good one must convert speech to text both accurately and with minimal delay.
Real-time transcription solutions achieving state-of-the-art accuracy often run in the cloud. Currently, Amazon Transcribe, Azure Speech-to-Text, Google Speech-to-Text, and IBM Watson Speech-to-Text are the dominant real-time transcription APIs. Running real-time transcription in the cloud requires data to be processed on remote servers, which introduces network latency and unreliable response times. Hence, cloud dependency can prevent applications from providing immediate feedback, whether the issue is on the user's side, the cloud provider's side, or both.
On-device real-time transcription solutions offer reliable real-time experiences by removing the inherent limitation of cloud computing: variable delay induced by network connectivity. However, running transcription locally with minimal resource requirements and without sacrificing accuracy is challenging, so there are few on-device transcription solutions. Currently, OpenAI Whisper is the most popular one, with smaller model sizes such as Whisper Tiny, Whisper Base, and Whisper Small. Running transcription in real time is even more challenging, and OpenAI Whisper does not support real-time transcription. There is no well-known on-device real-time transcription alternative that is developer-friendly and achieves state-of-the-art accuracy.
Cheetah Streaming Speech-to-Text is an extremely efficient on-device real-time transcription engine that matches the accuracy of file-based cloud transcription APIs while processing voice data locally and in real time. Below is a series of benchmarks backing these claims. Since Whisper models lack real-time transcription capabilities, we used the cloud providers' file-based transcription engines for consistency.
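For reference, here is a minimal sketch of the real-time usage pattern with the pvcheetah Python SDK. The `get_next_audio_frame()` helper is hypothetical and stands in for a live audio source (e.g., a microphone), and `${ACCESS_KEY}` is a placeholder for a Picovoice AccessKey:

```python
import pvcheetah

cheetah = pvcheetah.create(access_key='${ACCESS_KEY}')

try:
    while True:
        # `process()` expects a frame of `cheetah.frame_length` 16-bit PCM
        # samples and returns any newly transcribed text plus a flag that
        # indicates whether an endpoint (end of utterance) was detected.
        partial_transcript, is_endpoint = cheetah.process(get_next_audio_frame())
        print(partial_transcript, end='', flush=True)
        if is_endpoint:
            # `flush()` returns any remaining buffered transcription.
            print(cheetah.flush())
finally:
    cheetah.delete()
```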
[PS: Have you noticed that some applications, such as Apple’s Siri, update the visual feedback after the initial transcription? File-based transcription engines generally achieve higher accuracy than real-time transcription engines. Similar to humans, machines transcribe more accurately when they have more time to listen and gather context. Thus, some applications use streaming speech-to-text, such as Cheetah, and file-based speech-to-text, such as Leopard, together.]
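One way to combine the two, sketched here under the assumption of the pvcheetah and pvleopard Python SDKs: stream partial results with Cheetah for immediate feedback while buffering the raw audio, then re-transcribe the finished utterance with Leopard for a more accurate final transcript. `get_next_audio_frame()` is again a hypothetical audio source:

```python
import pvcheetah
import pvleopard

cheetah = pvcheetah.create(access_key='${ACCESS_KEY}')
leopard = pvleopard.create(access_key='${ACCESS_KEY}')

utterance_pcm = []  # raw samples buffered for the file-based second pass

while True:
    frame = get_next_audio_frame()  # hypothetical live audio source
    utterance_pcm.extend(frame)

    # Streaming pass: show partial text immediately.
    partial_transcript, is_endpoint = cheetah.process(frame)
    print(partial_transcript, end='', flush=True)

    if is_endpoint:
        print(cheetah.flush())
        # File-based pass: re-transcribe the whole utterance for higher accuracy.
        final_transcript, words = leopard.process(utterance_pcm)
        print(final_transcript)
        utterance_pcm = []
```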
Methodology
Speech Corpus
We use the following datasets for benchmarks:
- LibriSpeech test-clean
- LibriSpeech test-other
- Common Voice test
- TED-LIUM test
Metrics
Word Error Rate (WER)
Word error rate is the word-level edit distance between a reference transcript and the output of the speech-to-text engine, divided by the number of words in the reference transcript.
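Concretely, WER is the word-level Levenshtein distance normalized by the reference length. Below is a minimal, self-contained sketch (not necessarily the implementation used in the benchmark):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] holds the edit distance between the first i reference
    # words and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)


# One substituted word in a four-word reference -> WER = 0.25
print(word_error_rate('the quick brown fox', 'the quick brown box'))
```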
Core-Hour
The Core-Hour metric is used to evaluate the computational efficiency of the speech-to-text engine, indicating the number of CPU hours required to process one hour of audio. A speech-to-text engine with a lower Core-Hour is more computationally efficient. We omit this metric for cloud-based engines.
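In other words, if an engine takes T hours of wall-clock time on C cores to process A hours of audio, its Core-Hour is T × C / A. A trivial helper makes the arithmetic explicit:

```python
def core_hour(wall_clock_seconds: float, num_cores: int, audio_seconds: float) -> float:
    # CPU-hours consumed per hour of audio; the time units cancel,
    # so seconds work as well as hours.
    return (wall_clock_seconds * num_cores) / audio_seconds


# e.g., 30 minutes of wall-clock time on 10 cores for 5 hours of audio -> 1.0
print(core_hour(30 * 60, 10, 5 * 3600))
```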
Results
Accuracy
The figure below shows the accuracy of each engine averaged over all datasets.
Core-Hour
The figure below shows the resource requirement of each engine.
Please note that we ran the benchmark across the entire TED-LIUM dataset on an Ubuntu 22.04 machine with an AMD Ryzen 9 5900X (12-core) CPU @ 3.70 GHz, 64 GB of RAM, and NVMe storage, using 10 cores simultaneously, and recorded the processing time to obtain the results below. Different datasets and platforms affect the Core-Hour; however, the ratios among engines should hold if everything else is the same. For example, Whisper Tiny requires 3x the resources, or takes 3x as long, compared to Picovoice Leopard.
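For illustration, a measurement along these lines could be structured as below; `transcribe()` is a hypothetical stand-in for the engine under test, and the file list and total audio duration come from the dataset:

```python
import time
from concurrent.futures import ProcessPoolExecutor

NUM_WORKERS = 10  # cores used simultaneously, as in the setup above


def transcribe(path: str) -> None:
    # Hypothetical stand-in: replace with a call to the engine under test.
    time.sleep(0.01)


def measure_core_hour(audio_paths: list, total_audio_seconds: float) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=NUM_WORKERS) as executor:
        list(executor.map(transcribe, audio_paths))
    elapsed = time.perf_counter() - start
    # Same Core-Hour arithmetic as above: CPU-time / audio duration.
    return (elapsed * NUM_WORKERS) / total_audio_seconds
```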
Usage
The data and code used to create this benchmark are available on GitHub under the permissive Apache 2.0 license. Detailed instructions for benchmarking individual engines are provided in the following documents: