Call centers are the first point of contact with customers. Successful call center operations streamline the customer experience and generate exceptional insights that help companies differentiate themselves, potentially doubling shareholder value over ten years. However, generating insights and acting on them is not easy. Below are three challenges call centers face in implementing speech analytics.
1. Increasing transcribed data coverage
Voice data is unstructured and therefore not directly analyzable. The most common approach is converting voice to text. However, cost and accuracy are the biggest challenges in transcribing conversations.
Transcribing an hour-long recording takes a human transcriber about four hours, which is costly and not feasible at scale. As a result, most call centers randomly sample two to four interactions per agent per month, i.e., less than 2% of interactions.
Speech-to-text (STT) adoption in call centers automates transcription and is a significant improvement over human transcribers. Yet running large ASR models in the cloud is still costly, which hinders wider adoption. Accuracy is equally integral when choosing a speech-to-text engine: poor accuracy does more harm than good. If the software misses the “not” in “I am not very happy with your service.”, the results will be misleading.
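To make the “not” example concrete, here is a minimal sketch of a rule-based polarity check over a transcript. The keyword lists and two-token negation window are illustrative assumptions, not any vendor’s method; the point is that a single dropped word flips the result.

```python
# Illustrative sketch: why a single dropped word matters for sentiment.
# NEGATIONS and the two-token window are assumptions for this example.

NEGATIONS = {"not", "never", "no"}

def naive_sentiment(transcript: str) -> str:
    """Tiny rule-based polarity check over a transcript."""
    tokens = transcript.lower().replace(".", "").split()
    if "happy" not in tokens:
        return "neutral"
    idx = tokens.index("happy")
    # A negation word within the two tokens before "happy" flips polarity.
    if any(t in NEGATIONS for t in tokens[max(0, idx - 2):idx]):
        return "negative"
    return "positive"

print(naive_sentiment("I am not very happy with your service."))  # negative
print(naive_sentiment("I am very happy with your service."))      # positive
```

If the ASR drops “not”, the second call is what the analytics pipeline sees, and an unhappy customer is counted as a happy one.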
Recent improvements in deep learning have made local speech-to-text with cloud-level accuracy possible. For example, Leopard Speech-to-Text is up to 20x more affordable. Now, enterprises that transcribe only 5-10% of interactions can transcribe every conversation for the same price.
2. Generating insights from transcribed voice data
Data alone does not mean much without insights. Yet finding the right analytics tools is challenging but rewarding, as a McKinsey study shows.
Speech analytics is not yet mainstream. Currently, it is primarily limited to keyword and phrase detection, which lets companies segment customers by sentiment, intent, or interest and monitor agents’ performance or compliance.
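Keyword and phrase detection can be sketched in a few lines. The intent names and phrase lists below are made up for illustration; real deployments maintain much larger, domain-specific vocabularies.

```python
# Minimal sketch of keyword/phrase-based segmentation.
# The intent labels and phrase lists are hypothetical examples.

INTENT_PHRASES = {
    "cancel": {"cancel", "close my account", "unsubscribe"},
    "billing": {"invoice", "refund", "charged twice"},
    "support": {"not working", "error", "broken"},
}

def tag_intents(transcript: str) -> list:
    """Return every intent whose phrases appear in the transcript."""
    text = transcript.lower()
    return [intent for intent, phrases in INTENT_PHRASES.items()
            if any(phrase in text for phrase in phrases)]

print(tag_intents("I was charged twice and want a refund"))  # ['billing']
```

Tagging every transcript this way is what enables the segmentation and monitoring described above; the limitation is that it only finds what someone thought to list in advance.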
Advanced analytics, such as predictive analysis for real-time coaching, require investment. Tools that analyze speech data directly, such as speech emotion detection, are not widely available: most call centers do not have in-house machine learning experts to build them, and the big tech companies that dominate the market do not offer customization.
3. Applying insights to improve the bottom line
Call centers use insights to
- plan capacity to improve productivity,
- automate tasks to decrease average handling time and cut labour costs,
- train agents to achieve higher first-call resolution rates,
- take proactive action to minimize churn and increase revenue by creating cross-sell and up-sell opportunities,
- enforce compliance protocols to prevent fraudulent activities or toxicity in conversations.
After transcribing and processing conversations, enterprises should decide on the initiatives that contribute most to their success, and then execute them well.
An insight might be that users do not need to talk to an agent to reset their passwords. The corresponding initiative might be to build an IVR for such routine tasks and train agents for more idiosyncratic requests. The success of this initiative also relies on accuracy and cost: an IVR with poor accuracy can increase churn, let alone fulfill its promises.
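The password-reset example above amounts to a routing rule: automate the routine request, escalate everything else. A minimal sketch, with assumed trigger phrases:

```python
# Hypothetical IVR routing rule for the password-reset example.
# The trigger phrases are assumptions, not a real product's configuration.

SELF_SERVICE_PHRASES = ("reset my password", "forgot my password")

def route_call(utterance: str) -> str:
    text = utterance.lower()
    if any(phrase in text for phrase in SELF_SERVICE_PHRASES):
        return "self-service"  # routine task handled by the IVR
    return "agent"             # idiosyncratic request, escalate

print(route_call("I forgot my password"))          # self-service
print(route_call("My shipment arrived damaged"))   # agent
```

This also shows why accuracy matters twice: a misrecognized utterance either traps a customer in self-service or wastes an agent on a request the IVR should have handled.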
Customer care is a big industry that is evolving with advances in AI. Not surprisingly, big tech is interested in it as well. In March 2022, both Microsoft and Google announced further investments in call center solutions. Big tech becoming a competitor to its customers, or dominating particular verticals, is not new. Access to customers’ data and control over tools such as speech-to-text give big tech a unique advantage.
Speech analytics is vital to maintaining call centers’ competitive advantage. For more information, contact sales or read more: