Falcon Speaker Diarization

Tag speakers in conversations,
Find “who spoke when”

On-device speaker diarization, enabling machines and humans to read and analyze transcripts without sacrificing privacy

Trusted by thousands of enterprises - from startups to Fortune 500s
Loved by 50,000+ developers

What is Falcon Speaker Diarization?

Falcon Speaker Diarization identifies speakers in an audio stream by finding speaker change points and grouping speech segments based on speaker voice characteristics.

Powered by deep learning, Falcon Speaker Diarization enables machines and humans to read and analyze conversation transcripts created by Speech-to-Text APIs or SDKs.

Identify “who spoke when” with a few lines of code

import pvfalcon

f = pvfalcon.create(access_key)
segments = f.process_file(path)
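
Each returned segment carries a speaker label and timing information. A quick usage sketch, assuming the segment fields speaker_tag, start_sec, and end_sec exposed by the Falcon Python SDK:

for segment in segments:
    # e.g., "Speaker 1: 0.50s - 4.20s"
    print("Speaker %d: %.2fs - %.2fs" % (segment.speaker_tag, segment.start_sec, segment.end_sec))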

Why Falcon Speaker Diarization?

Speaker Diarization software is often tied to specific Speech-to-Text APIs or limited to certain platforms, restricting developers’ options.

Falcon Speaker Diarization is the only modular and cross-platform Speaker Diarization software that works with any Speech-to-Text engine. Falcon Speaker Diarization processes speech data locally without sending it to remote servers, respecting privacy.

Make transcripts easy to read and analyze

Identify speakers in conversations

  • 🚀 Ready in Seconds
  • 🔒 Private
  • ♾️ Uncapped number of speakers
  • Accurate
  • ⚙️ Engine-agnostic
  • 🪶 Lightweight
  • 🤸 Cross-platform
  • 🌍 Multilingual
Scientifically-Proven Accuracy

Compare Speaker Diarization models transparently

Most speech-to-text APIs position Speaker Diarization as a feature and do not even mention its accuracy. Picovoice published an Open-source Speaker Diarization Benchmark to enable informed decisions.

Speech-to-Text-Agnostic

Pair up with any transcription engine

Most Speaker Diarization solutions work only with the transcription engines they are integrated with. Falcon Speaker Diarization runs with any transcription engine, including OpenAI Whisper, Google Speech-to-Text, Amazon Transcribe, Azure Speech-to-Text, and IBM Watson Speech-to-Text, giving developers flexibility.

Lightweight

SOTA with 100x less compute and memory

Falcon Speaker Diarization requires far less compute and memory than the alternatives to achieve SOTA (state-of-the-art) accuracy. Utilize existing hardware, minimize compute costs, and save the environment!

Learn more about Falcon Speaker Diarization

What is Speaker Diarization?

Speaker Diarization deals with identifying “who spoke when”. It splits an audio stream that contains human speech into homogeneous segments based on speaker voice characteristics, then associates each segment with an individual speaker.

What are the steps in Speaker Diarization?

Speaker Diarization consists of two main steps: speaker segmentation and speaker clustering. Speaker segmentation focuses on finding speaker change points in an audio stream. Clustering groups speech segments together based on speakers’ voice characteristics.
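
As a toy illustration of the clustering step (not how Falcon is implemented), assume segmentation has already produced one voice embedding per speech segment; the made-up numbers below stand in for embeddings from a speaker-embedding model, and off-the-shelf agglomerative clustering groups segments by speaker:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Made-up voice embeddings, one per speech segment found by segmentation.
segment_embeddings = np.array([
    [0.90, 0.10],  # segment 1
    [0.10, 0.80],  # segment 2
    [0.85, 0.15],  # segment 3 -- close to segment 1, likely the same speaker
])

# Speaker clustering: group segments whose voice characteristics are similar.
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5)
labels = clustering.fit_predict(segment_embeddings)
print(labels)  # segments 1 and 3 get the same label, i.e., the same speaker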

How does Speaker Diarization differ from Speech-to-Text?

Speech-to-Text deals with “what is said.” It converts speech into text without distinguishing speakers, i.e., “who?”. Speech-to-text with timestamps also includes timing information, i.e., “when”.

Speaker Diarization differentiates speakers, answering “who spoke, when” without analyzing “what’s said.” Thus, developers use Speech-to-Text and Speaker Diarization together to identify “who said what and when.”

In short, Speaker Diarization and Speech-to-Text are complementary speech-processing technologies. Speaker Diarization enhances Speech-to-Text transcripts of conversations where multiple speakers are involved. The resulting transcript tags each word with a number assigned to an individual speaker, using as many distinct numbers as there are speakers Speaker Diarization uniquely identifies in the audio sample.

Leopard Speech-to-Text and Cheetah Streaming Speech-to-Text are Picovoice’s Speech-to-Text engines. Leopard Speech-to-Text is ideal for batch audio transcription, while Cheetah Streaming Speech-to-Text is for real-time transcription.
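
A rough sketch of pairing the two, assuming the word fields (word, start_sec, end_sec) returned by Leopard’s Python SDK and the segment fields (speaker_tag, start_sec, end_sec) returned by Falcon’s, might assign each transcribed word to the speaker whose segment contains it:

import pvfalcon
import pvleopard

falcon = pvfalcon.create(access_key=access_key)
leopard = pvleopard.create(access_key=access_key)

segments = falcon.process_file(path)            # "who spoke when"
transcript, words = leopard.process_file(path)  # "what was said", with word timestamps

def speaker_of(word):
    # Assign the word to the diarization segment that contains its midpoint.
    midpoint = (word.start_sec + word.end_sec) / 2
    for segment in segments:
        if segment.start_sec <= midpoint <= segment.end_sec:
            return segment.speaker_tag
    return -1  # no matching segment (e.g., non-speech)

for word in words:
    print("Speaker %d said '%s' at %.2fs" % (speaker_of(word), word.word, word.start_sec))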

How does Speaker Diarization differ from Speaker Recognition?

Speaker Diarization and Speaker Recognition are related but distinct technologies that enable different use cases. Both identify speakers by analyzing their voice characteristics. Speaker Recognition identifies “known” speakers, whereas Speaker Diarization differentiates speakers without knowing who they are. Speaker Recognition returns the recorded names of enrolled speakers, such as Jane and Joe, and cannot identify speakers without enrolled voice prints. Speaker Diarization, on the other hand, returns labels such as Speaker 1 and Speaker 2 without requiring speakers’ voice prints. Speaker Diarization does not transfer information between audio files, meaning the same speaker can be Speaker 1 in one file and Speaker 2 in another.

In short, Speaker Recognition can verify speakers, whereas Speaker Diarization does not match voice characteristics to verify speakers. Check out Eagle Speaker Recognition and its web demo to learn more about speaker recognition.

What can I build with Speaker Diarization?

Enterprises, from medical and legal practices to call centers, leverage audio transcription to transcribe calls, meetings, and conversations. Speaker Diarization plays a critical role in these industries by improving the readability of transcripts and enabling further analysis.

What is engine-agnostic Speaker Diarization?

Most vendors offer Speaker Diarization embedded in their Speech-to-Text software, since developers use Speaker Diarization to identify speakers within a transcript produced by Speech-to-Text. Offering them jointly simplifies development, but it limits developers’ ability to choose what works best for them. Engine-agnostic Speaker Diarization works with any Speech-to-Text software. Developers who are unsatisfied with the performance of an embedded Speaker Diarization, or who prefer Speech-to-Text software that doesn’t offer one, can use Falcon Speaker Diarization with the Speech-to-Text of their choice.

Can I use Falcon Speaker Diarization with OpenAI Whisper Speech-to-Text?

Yes, you can use Falcon Speaker Diarization with OpenAI’s Whisper Speech-to-Text or any other automatic speech recognition engine, including but not limited to Amazon Transcribe, Google Speech-to-Text, and Microsoft Azure Speech-to-Text.
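
As an illustration (not an official integration), a sketch pairing Falcon with the open-source openai-whisper package, assuming Whisper’s transcribe() output with per-segment start, end, and text fields, could label each Whisper segment with the most-overlapping Falcon speaker:

import pvfalcon
import whisper  # openai-whisper

falcon = pvfalcon.create(access_key=access_key)
falcon_segments = falcon.process_file(path)                    # speaker labels with timestamps
whisper_result = whisper.load_model("base").transcribe(path)   # text with timestamps

def overlap(a_start, a_end, b_start, b_end):
    # Length of the time overlap between two intervals (0 if they don't overlap).
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

for ws in whisper_result["segments"]:
    # Label each Whisper segment with the Falcon segment it overlaps the most.
    best = max(
        falcon_segments,
        key=lambda fs: overlap(ws["start"], ws["end"], fs.start_sec, fs.end_sec))
    print("Speaker %d: %s" % (best.speaker_tag, ws["text"].strip()))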

Does Falcon Speaker Diarization support real-time Speaker Diarization?

Falcon Speaker Diarization doesn’t support real-time Speaker Diarization out-of-the-box. If your use case requires real-time Speaker Diarization, please engage with Picovoice Consulting.

Can I use Falcon Speaker Diarization for free?

Falcon Speaker Diarization is free to use with Picovoice’s Free Plan.

Which platforms does Falcon Speaker Diarization support?

* Falcon Speaker Diarization mobile support is currently in closed beta.

How do I get technical support for Falcon Speaker Diarization?

Picovoice docs, blog, Medium posts, and GitHub are great resources to learn about voice AI, Picovoice technology, and how to perform speaker diarization. You can report bugs and issues on GitHub. If you need help with developing your product, you can purchase the optional Support Add-on or upgrade your account to the Developer Plan.

How can I get informed about updates and upgrades?

Version changes are announced on LinkedIn. Watching the GitHub repository is the best way to get notified of patch releases. If you enjoy building with Falcon Speaker Diarization, show it by giving the repository a star on GitHub!