Key highlights

150% improvement in call coverage efficiency compared to the previous system

40% reduction in file processing time

100% call coverage from an optimized call-handling system that scaled operations to 25K calls/day

A global direct broadcast satellite service provider improved its call center performance using an AI solution. It now handles 25K daily calls, converting speech to text in real time, summarizing conversations, and analyzing customer sentiment for faster responses. Efficiency increased by 150%, processing time dropped by 40%, and issue tracking for repeat callers and unresolved problems improved, resulting in higher customer satisfaction and a future-ready foundation for stream processing and multi-channel support.

Overview

Modernizing call center operations at scale with AI and Databricks

The client faced challenges in analyzing daily customer interactions due to limited AI coverage and delayed batch processing. Zensar implemented an AI-driven solution built on Databricks for scalable data orchestration, OpenAI Whisper for accurate voice-to-text transcription, and LLM-based processing to derive insights. Python microservices with RESTful APIs ensured seamless integration with existing systems, enabling real-time responsiveness and boosting operational efficiency.

  • Used OpenAI Whisper for speech-to-text 

  • Applied LLM summarization and categorization 

  • Used Databricks for scalable data orchestration across a hybrid cloud spanning on-prem and Azure 

  • Developed Python-based microservices with REST APIs for real-time integration with existing systems

  • Achieved full coverage of 25K calls/day 

  • Reduced processing time by 40% 

  • Improved detection of repeat callers and sentiment analysis 

  • Established a foundation for future stream processing and multi-channel integration

Challenges

Identify gaps in call analysis and responsiveness, implement corrective actions, and enhance efficiency to improve customer experience.

The client’s AI system handled only 10K of the 25K daily calls, leaving most interactions unanalyzed. Batch processing slowed issue detection and blocked real-time responsiveness. Without tracking repeat callers or unresolved issues, service quality and agility were impacted, creating an urgent need for a smarter, scalable solution to improve coverage, speed, and customer experience. 

Solution

AI-powered real-time call analysis using Databricks

Zensar delivered a robust AI solution to overcome the client’s challenges. The system leveraged Databricks for scalable data orchestration and OpenAI Whisper for precise voice-to-text transcription. LLMs summarized and categorized conversations, turning raw data into actionable insights. To ensure flexibility and speed, Python microservices integrated seamlessly with existing systems via REST APIs, while Databricks Workflows automated the ETL pipelines for reliability and performance.

1. Voice-to-text transcription using OpenAI Whisper

  • Databricks File System (DBFS) to store intermediate data and metadata
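As a rough sketch of this stage (the helper, file names, and model size below are illustrative assumptions, not the production code), transcription can be seen as mapping audio files to transcripts. The transcriber is injected as a callable so the shape is clear without the `whisper` package; in production it would wrap a Whisper model, with results written to DBFS for the downstream LLM stage:

```python
from typing import Callable, Dict, List

def transcribe_batch(audio_paths: List[str],
                     transcribe: Callable[[str], str]) -> Dict[str, str]:
    """Map each call recording to its transcript.

    In production, `transcribe` would wrap a Whisper model, e.g.
    whisper.load_model("base").transcribe(path)["text"], with the
    transcripts landed on DBFS as intermediate data for later steps.
    """
    return {path: transcribe(path) for path in audio_paths}

# Stand-in transcriber for illustration (hypothetical file names):
stub = lambda path: f"transcript of {path}"
transcripts = transcribe_batch(["call_001.wav", "call_002.wav"], stub)
```

Keeping the transcriber pluggable also makes the pipeline unit-testable without GPU-backed model inference.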

2. LLM-based summarization and categorization

  • Created Databricks tables in the legacy Hive metastore with schema enforcement for better data reliability and consistent error-handling logs
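A minimal sketch of the summarization and categorization step, assuming illustrative prompt wording and category labels (the production system called a GPT model; here the LLM is a pluggable callable):

```python
from typing import Callable, Dict

# Category labels are assumptions for illustration, not the client's taxonomy.
CATEGORIES = ["billing", "technical issue", "cancellation", "other"]

def analyze_transcript(transcript: str, llm: Callable[[str], str]) -> Dict[str, str]:
    """Ask an LLM for a two-sentence summary, a category, and a sentiment label."""
    prompt = (
        "Summarize the call transcript in two sentences, then assign one "
        f"category from {CATEGORIES} and a sentiment "
        "(positive/neutral/negative).\n\nTranscript:\n" + transcript
    )
    return {"transcript": transcript, "analysis": llm(prompt)}

# Stand-in LLM for illustration:
result = analyze_transcript("Customer reports a billing error.",
                            lambda p: "Summary... | billing | negative")
```

In the real pipeline, the structured output would be parsed and persisted to the schema-enforced tables described above.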

3. Databricks for scalable data orchestration across hybrid cloud

  • Databricks Workflows for automated ETL, reducing manual overhead
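For orientation, a Databricks Workflows job chaining these stages might be defined along these lines for the Jobs API (task names, notebook paths, and the schedule are assumptions, not the client's actual job definition):

```python
# Hypothetical job definition in the shape accepted by the Databricks
# Jobs API 2.1 (POST /api/2.1/jobs/create). Names/paths are illustrative.
job_config = {
    "name": "call-analytics-etl",
    "tasks": [
        {"task_key": "transcribe",
         "notebook_task": {"notebook_path": "/pipelines/whisper_transcribe"}},
        {"task_key": "llm_analyze",
         "depends_on": [{"task_key": "transcribe"}],
         "notebook_task": {"notebook_path": "/pipelines/llm_summarize"}},
    ],
    # Frequent scheduled runs approximate near-real-time batch refresh.
    "schedule": {"quartz_cron_expression": "0 0/15 * * * ?",
                 "timezone_id": "UTC"},
}
```

Declaring task dependencies this way is what removes the manual overhead: the platform handles ordering, retries, and scheduling.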

4. Python microservices with RESTful APIs to boost performance and enable modular, real-time integration with existing systems

  • Incremental data processing to speed up handling of larger datasets
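The incremental-processing idea can be sketched in plain Python with a simple watermark (field names and the mechanism here are illustrative; on Databricks this role is typically played by Delta/Spark incremental reads):

```python
from datetime import datetime
from typing import Dict, List, Tuple

def incremental_batch(records: List[Dict], watermark: datetime
                      ) -> Tuple[List[Dict], datetime]:
    """Select only records newer than the last processed timestamp.

    Re-runs over a growing dataset then touch just the new calls,
    which keeps processing fast as volume scales toward 25K/day.
    """
    fresh = [r for r in records if r["received_at"] > watermark]
    new_watermark = max((r["received_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

# Illustrative usage with hypothetical call records:
calls = [
    {"id": 1, "received_at": datetime(2024, 1, 1, 9, 0)},
    {"id": 2, "received_at": datetime(2024, 1, 1, 12, 0)},
]
fresh, wm = incremental_batch(calls, datetime(2024, 1, 1, 10, 0))
```

Each run advances the watermark, so previously processed calls are never reprocessed.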

Impact

Delivered real-time insights and operational agility at scale
1. 150% efficiency increase
2. 40% reduction in processing time
3. Full coverage of 25K calls/day
4. Improved repeat-caller detection and sentiment analysis

Business Outcome

The solution transformed call center operations by delivering real-time insights and complete coverage of 25K daily interactions. Efficiency improved by 150%, and processing time dropped by 40%, enabling faster responses and proactive issue resolution through sentiment analysis and repeat caller detection. Built on a future-ready architecture, it supports stream processing and multi-channel integration. Leveraging Apache Spark within Databricks, the solution efficiently processed massive structured and unstructured datasets from Azure Blob Storage, ensuring scalability and reliability.

Conclusion

By leveraging Databricks’ scalable data orchestration and LLM-driven insights, the solution transformed call center operations into a real-time, AI-powered ecosystem. Databricks notebooks provided an interactive, collaborative space for developing the data processing pipeline, including speech-to-text with the OpenAI Whisper model and insight generation with a GPT LLM. The solution enabled full coverage of 25K daily calls, accelerated issue detection, and improved customer experience through proactive resolution. With modular microservices and automated workflows, the organization achieved a 150% efficiency boost, reduced processing time by 40%, and established a robust foundation for future stream processing and multi-channel integration.

Let's connect

Stay ahead with the latest updates or kick off an exciting conversation with us today!