From virtual agents to smart call routing and real-time guidance, AI is starting to make a real impact in contact centres. It’s helping teams cut manual work, improve service quality, and respond faster to customer needs.
But while the potential is clear, scaling AI across a contact centre isn’t straightforward. Many organisations run pilots but struggle to move into full production or deliver measurable outcomes.
In our recent webinar, Breaking the barriers to AI at scale, we unpacked four of the biggest blockers holding teams back. With insights from Raïsa van Olden, Product Marketing Director, Matt Hughes, Head of Product, and Tue Martin Berg, Executive Director, the session explored how high-maturity organisations are moving past these obstacles — and what practical steps you can take to do the same.
Our AI Maturity survey and wider industry research show a clear pattern: enthusiasm for AI is high, but adoption is lagging. Only about one in three companies move AI out of pilot; just over a third deploy models in real operations; and a tiny minority achieve impact at scale.
Meanwhile, millions of employees already use GenAI tools in their everyday work, even though many companies don’t feel “ready”. It’s a classic adoption gap. As Raïsa explained during the session, “The danger is that adoption happens by accident, not by design. That’s why clarity of direction is more important than perfection.”
The real question isn’t whether AI is being used, but whether organisations are embedding it strategically, safely, and at scale.
1. Process complexity: AI can’t scale in a broken workflow
Rigid, manual steps. Disconnected systems. Unclear ownership. When AI is “bolted on” to yesterday’s process, you get duplicate effort, hand-offs and frustration, not impact. As Matt put it, “Adding AI without redesigning the flow is like putting a high-performance engine in a car with square wheels. The power’s there, but you’re not going anywhere.”
AI that isn’t properly integrated into the workflow often creates more work, not less, and poor adoption across the team quickly follows.
Raïsa summarised it neatly: “AI bolted onto legacy processes won’t solve complexity, it risks adding another layer. Integration and redesign go hand-in-hand.”
How to move forward:
2. Skills and confidence: people make or break adoption
Even the best models fall flat if people aren’t confident, aligned and trained. That’s why high-maturity organisations invest in shared mental models: how AI works, where it adds value, and how to use it responsibly in daily work.
As Matt explained, “Treat knowledge as a two-way street: people learn from AI — and AI learns from people through feedback.”
Without upskilling, even the best AI model can feel like a black box. That often leads to underuse or outright resistance. And this isn’t just about your IT team: everyone from frontline agents to CX leaders needs to understand how AI fits into their work.
Tue put it simply: “High-maturity organisations are those where AI isn’t just a tool — it’s part of the culture. Training and trust make the difference.”
How to move forward:
3. Data quality: pilots run on clean data, production doesn’t
Another stumbling block is data. Pilots often shine because they use clean, well-structured data. Production is messier: multiple systems, missing fields, inconsistent formats. AI is only as good as the information you feed it — and the integration and quality controls you put in place. Without a strong data foundation, AI risks being underutilised or, worse, misinformed.
As Matt highlighted, “AI is only as good as the data you feed it — and the governance you wrap around it.”
How to move forward:
In the webinar, Raïsa shared how Telmore overcame this barrier by automating call tagging with Conversational Intelligence. Instead of relying on manual wrap-up, AI now tags interactions consistently and accurately, giving leaders reliable insight into customer drivers.
4. Legal and compliance: involve your risk functions early
Finally, legal and compliance concerns are front of mind for many leaders. In Europe, trustworthy AI starts with data protection and transparency, and Tue’s guidance was simple: “The smartest move you can make is to involve your risk-mitigating functions early. They’ll ask the questions you might not think of until it’s too late.”
Those questions typically cover lawful basis, consent, data protection impact assessments (DPIAs), the AI Act’s transparency requirements, model-training restrictions, and local employment obligations such as union consultation where applicable.
How to move forward:
Across industries, we see a common pattern among teams that move past pilots:
Use this to turn momentum into measurable progress:
AI is already proving its value in contact centres — cutting manual work, boosting quality, and helping agents deliver faster, more human service. But as we’ve seen, the real challenge isn’t getting started; it’s building the clarity, confidence and foundations needed to scale.
The organisations making the biggest strides aren’t those chasing the newest tools. They’re the ones aligning AI with clear CX goals, redesigning processes before they automate, and investing in people as much as technology.
If there’s one takeaway, it’s this: progress comes from intentional steps, not accidental adoption. By tackling process complexity, building skills, strengthening data foundations and addressing compliance early, you can move AI beyond pilots and turn it into everyday impact.
Did you miss the session? Watch it on-demand here.
This article is based on Puzzel’s Expert Session “Breaking the barriers to AI at scale,” featuring Raïsa van Olden, Matt Hughes and Tue Martin Berg. Quotes and statistics are drawn from the live discussion and session materials.