
Breaking the barriers to AI at scale: How CX leaders turn pilots into progress.

Jeanine Desirée Lund
Content Marketing Manager | Senior Content Strategist
AI adoption in contact centres

From virtual agents to smart call routing and real-time guidance, AI is starting to make a real impact in contact centres. It’s helping teams cut manual work, improve service quality, and respond faster to customer needs. 

But while the potential is clear, scaling AI across a contact centre isn’t straightforward. Many organisations run pilots but struggle to move into full production or deliver measurable outcomes. 

In our recent webinar, Breaking the barriers to AI at scale, we unpacked four of the biggest blockers holding teams back. With insights from Raïsa van Olden, Product Marketing Director, Matt Hughes, Head of Product, and Tue Martin Berg, Executive Director, the session explored how high-maturity organisations are moving past these obstacles — and what practical steps you can take to do the same. 

The current state in CX: AI interest is high, but progress is uneven

Our AI Maturity survey and wider industry research show a clear pattern: enthusiasm for AI is high, but adoption is lagging. Only about one in three companies move AI beyond the pilot stage; just over a third deploy models in real operations; and only a small minority achieve impact at scale.  

Meanwhile, millions of employees already use GenAI tools in their everyday work, even though many companies don’t feel “ready”. It’s a classic adoption gap, and it’s why clarity of direction matters more than perfection. As Raïsa explained during the session, “The danger is that adoption happens by accident, not by design. That’s why clarity of direction is more important than perfection.” 

The real question isn’t whether AI is being used, but whether organisations are embedding it strategically, safely, and at scale. 

The four barriers holding teams back (and how to overcome them)

1. Process complexity: AI can't scale in a broken workflow

Rigid, manual steps. Disconnected systems. Unclear ownership. When AI is “bolted on” to yesterday’s process, you get duplicate effort, hand-offs and frustration, not impact. As Matt put it, “Adding AI without redesigning the flow is like putting a high-performance engine in a car with square wheels. The power’s there, but you’re not going anywhere.” 

AI that isn’t fully integrated, or is simply bolted on to legacy systems, often creates more work, not less. It can introduce duplicate steps, increase frustration, and lead to poor adoption across the team. 

Raïsa summarised it neatly: “AI bolted onto legacy processes won’t solve complexity, it risks adding another layer. Integration and redesign go hand-in-hand.” 

How to move forward: 

  • Redesign end-to-end journeys before you automate. Remove hand-offs; automate the right steps in the right order.  
  • Embed AI where work happens (agent desktop, QA, knowledge), so it feels invisible and genuinely speeds things up. 
  • Make ownership explicit (who decides, who maintains, who measures). 
  • Focus on integration. Make sure any AI solution fits seamlessly with your existing workflows and platforms. 

2. Knowledge and skills gaps: AI fails when teams don’t know how to use it

Even the best models fall flat if people aren’t confident, aligned and trained. That’s why high-maturity organisations invest in shared mental models: how AI works, where it adds value, and how to use it responsibly in daily work. 

As Matt explained, “Treat knowledge as a two-way street: people learn from AI — and AI learns from people through feedback.”  

Without upskilling, even the best AI model can feel like a black box. That often leads to underuse or outright resistance. And this isn’t just about your IT team: everyone from frontline agents to CX leaders needs to understand how AI fits into their work. 

Tue put it simply: “High-maturity organisations are those where AI isn’t just a tool — it’s part of the culture. Training and trust make the difference.” 

How to move forward: 

  • Build a practical curriculum. Cover essentials like prompt techniques, “trust-but-verify” checks, bias awareness, and decision logs. 
  • Establish an AI use policy, roles and responsibilities, and KPIs for AI-assisted work.  
  • Close the loop: capture agent feedback on answers and continuously improve your knowledge base. 
  • Invest in upskilling your agents. Training should go beyond just using tools — it should help them understand why AI matters and how it supports them. 
  • Make space for cross-functional learning. AI success isn’t just an IT project — involve your service, operations, and customer experience teams early and often.

3. Data limitations: pilots succeed, production stumbles

Another stumbling block is data. Pilots often shine because they use clean, well-structured data. Production is messier: multiple systems, missing fields, inconsistent formats. AI is only as good as the information you feed it — and the integration and quality controls you put in place. Without a strong data foundation, AI risks being underutilised or, worse, misinformed. 

As Matt highlighted, “AI is only as good as the data you feed it — and the governance you wrap around it.” 

How to move forward: 

  • Map data sources early; fix structure, freshness and access before you scale.  
  • Instrument quality checks and monitoring so accuracy doesn’t degrade silently.  
  • Use a single source of truth for knowledge powering copilots and virtual agents. 
  • Prioritise data hygiene. Ensure your information is accurate, relevant, and updated regularly to maximise AI performance. 

In the webinar, Raïsa shared how Telmore overcame this barrier by automating call tagging with Conversational Intelligence. Instead of relying on manual wrap-up, AI now tags interactions consistently and accurately, giving leaders reliable insight into customer drivers.

4. Legal and compliance: build on solid ground

Finally, legal and compliance concerns are front of mind for many leaders. In Europe, trustworthy AI starts with data protection and transparency. Tue’s guidance was simple: involve your risk-mitigating functions early. They’ll surface the right questions on lawful basis, consent, DPIAs, the AI Act’s transparency requirements, model-training restrictions, and even local employment obligations (including union consultation where applicable).  

Tue cautioned that “The smartest move you can make is to involve your risk-mitigating functions early. They’ll ask the questions you might not think of until it’s too late.” 


How to move forward: 

  • Be transparent: inform employees and customers when AI is in play, in clear language.  
  • Limit retention, encrypt data in transit and at rest, and apply strict access controls and pseudonymisation where possible.  
  • If you use vendors, confirm data location (e.g., EU/EEA by default), cross-border safeguards (SCCs), and documentation such as Transfer Impact Assessments. 
  • Document your AI use. Keep a clear record of where and how AI is being used, and regularly review it to stay aligned with regulations. 

What separates high-maturity teams

Across industries, we see a common pattern among teams that move past pilots: 

  • They align on purpose. They don’t adopt AI for AI’s sake. They adopt it to reduce resolution time, cut repeat contacts, or expand QA coverage. 
  • They integrate AI into operations, not side projects. Tools live inside workflows, with ownership and metrics.  
  • They scale intentionally. Pilots are time-boxed; successful patterns are productised with governance and change management.  

A readiness checklist for moving forward

Use this to turn momentum into measurable progress: 

  • Align your CX goals with AI use cases 
    Pick 1–2 outcomes that matter (e.g., faster wrap-up, better first-contact resolution, lower repeat contacts) and design backwards.  
  • Map your current maturity 
    Be honest about where you are across process, knowledge, data and legal. Use that diagnosis to prioritise. (Tip: take our short maturity survey to see where you stand.) 
  • Fix the flow before the model 
    Remove hand-offs, clarify ownership, and integrate AI at the point of work (agent desktop, QA, knowledge).  
  • Level up skills and governance 
    Run hands-on training, publish an AI use policy, define KPIs, and build human-in-the-loop feedback.  
  • Strengthen your data and compliance 
    Clean your data pipelines, add monitoring, and confirm lawful basis, transparency and security controls. Engage your risk teams early.  
  • Scale what works 
    Turn pilot wins into “the new way we work” — with documentation, enablement, and continuous improvement. 

Final thoughts

AI is already proving its value in contact centres — cutting manual work, boosting quality, and helping agents deliver faster, more human service. But as we’ve seen, the real challenge isn’t getting started; it’s building the clarity, confidence and foundations needed to scale. 

The organisations making the biggest strides aren’t those chasing the newest tools. They’re the ones aligning AI with clear CX goals, redesigning processes before they automate, and investing in people as much as technology. 

If there’s one takeaway, it’s this: progress comes from intentional steps, not accidental adoption. By tackling process complexity, building skills, strengthening data foundations and addressing compliance early, you can move AI beyond pilots and turn it into everyday impact. 

Did you miss the session? Watch it on-demand here: 

This article is based on Puzzel’s Expert Session “Breaking the barriers to AI at scale,” featuring Raïsa van Olden, Matt Hughes and Tue Martin Berg. Quotes and statistics are drawn from the live discussion and session materials. 
