Falcon Speaker Diarization

Tag speakers in conversations.
Find “who spoke when”

On-device speaker diarization, enabling machines and humans to read and analyze transcripts without sacrificing privacy

Trusted by thousands of enterprises - from startups to Fortune 500s
Loved by 50,000+ developers

What is Falcon Speaker Diarization?

Falcon Speaker Diarization identifies speakers in an audio stream by finding speaker change points and grouping speech segments based on speaker voice characteristics.

Powered by deep learning, Falcon Speaker Diarization enables machines and humans to read and analyze conversation transcripts created by Speech-to-Text APIs or SDKs.

Identify “who spoke when” with a few lines of code

import pvfalcon

falcon = pvfalcon.create(access_key=access_key)
segments = falcon.process_file(audio_path)
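Each segment returned by process_file carries a speaker tag and start and end times, so “who spoke when” can be printed directly or used to annotate a transcript. The sketch below assumes the segment fields documented for the pvfalcon Python SDK (speaker_tag, start_sec, end_sec) and that access_key and audio_path are already defined:

import pvfalcon

falcon = pvfalcon.create(access_key=access_key)
try:
    for segment in falcon.process_file(audio_path):
        # Each segment carries a speaker tag plus start/end times in seconds.
        print(f"Speaker {segment.speaker_tag}: {segment.start_sec:.1f}s - {segment.end_sec:.1f}s")
finally:
    falcon.delete()  # release native resources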

Why Falcon Speaker Diarization?

Speaker Diarization solutions are often tied to specific Speech-to-Text APIs or limited to certain platforms, restricting developers’ options.

Falcon Speaker Diarization is the only modular, cross-platform Speaker Diarization software that works with any Speech-to-Text engine. It processes speech data locally, without sending it to remote servers, preserving privacy.

Make transcripts easy to read and analyze

Identify speakers in conversations

  • 🚀 Ready in Seconds
  • 🔒 Private
  • ♾️ Uncapped number of speakers
  • 🤸 Cross-platform
  • ⚙️ Engine-agnostic
  • 🌍 Multilingual
Learn more about Falcon Speaker Diarization

What is Speaker Diarization?

Speaker Diarization deals with identifying “who spoke when”. It splits an audio stream containing human speech into homogeneous segments based on speakers’ voice characteristics, then associates each segment with an individual speaker.

What are the steps in Speaker Diarization?

Speaker Diarization consists of two main steps: speaker segmentation and speaker clustering. Speaker segmentation focuses on finding speaker change points in an audio stream. Clustering groups speech segments together based on speakers’ voice characteristics.
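As an illustration of the two steps (a conceptual sketch only, not Falcon’s internals), assume each short audio window has already been mapped to a speaker embedding vector: segmentation declares a change point where consecutive embeddings diverge, and clustering groups the resulting segments by similarity.

import numpy as np

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def segment(embeddings, threshold=0.5):
    # Speaker segmentation: mark a change point wherever consecutive
    # window embeddings differ by more than the threshold.
    change_points = [0]
    for i in range(1, len(embeddings)):
        if cosine_distance(embeddings[i - 1], embeddings[i]) > threshold:
            change_points.append(i)
    return change_points

def cluster(embeddings, change_points, threshold=0.5):
    # Speaker clustering: greedily assign each segment to the closest
    # existing speaker centroid, or open a new speaker if none is close.
    centroids, labels = [], []
    for start in change_points:
        e = embeddings[start]
        distances = [cosine_distance(e, c) for c in centroids]
        if distances and min(distances) < threshold:
            labels.append(int(np.argmin(distances)))
        else:
            centroids.append(e)
            labels.append(len(centroids) - 1)
    return labels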

How does Speaker Diarization differ from Speech-to-Text?

Speech-to-Text deals with “what is said”: it converts speech into text without distinguishing speakers, i.e., “who”. Speech-to-Text with timestamps also includes timing information, i.e., “when”.

Speaker Diarization differentiates speakers, answering “who spoke when” without analyzing “what is said.” Thus, developers use Speech-to-Text and Speaker Diarization together to identify “who said what and when.”

In short, Speaker Diarization and Speech-to-Text are complementary speech-processing technologies. Speaker Diarization enhances Speech-to-Text transcripts of conversations involving multiple speakers: the transcription result tags each word with a number assigned to an individual speaker, and the number of distinct tags matches the number of speakers that Speaker Diarization uniquely identifies in the audio.

Leopard Speech-to-Text and Cheetah Streaming Speech-to-Text are Picovoice’s Speech-to-Text engines. Leopard Speech-to-Text is ideal for batch audio transcription, while Cheetah Streaming Speech-to-Text is designed for real-time transcription.
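To show how the pieces fit together, the sketch below aligns Leopard’s word timestamps with Falcon’s speaker segments to print “who said what and when.” Field names follow the public pvleopard and pvfalcon Python SDKs; the alignment rule (a word belongs to the segment containing its start time) is a simplification for illustration, not the engines’ own logic, and access_key and audio_path are assumed to be defined.

import pvfalcon
import pvleopard

falcon = pvfalcon.create(access_key=access_key)
leopard = pvleopard.create(access_key=access_key)
try:
    segments = falcon.process_file(audio_path)            # who spoke when
    transcript, words = leopard.process_file(audio_path)  # what was said, and when

    for word in words:
        # Simplification: a word belongs to the segment that contains its start time.
        speaker = next(
            (s.speaker_tag for s in segments if s.start_sec <= word.start_sec <= s.end_sec),
            0,  # 0 marks words that fall outside any speaker segment
        )
        print(f"[{word.start_sec:6.2f}s] Speaker {speaker}: {word.word}")
finally:
    leopard.delete()
    falcon.delete()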