The short answer is yes: customers have a right to know when they're interacting with an artificial intelligence (AI) agent so they can escalate to a human should they feel the need. To that end, a simple disclosure such as "Hi, I'm an AI agent and I'm ready to help you!" would take care of it. But the real objective is to make the experience so seamless that customers don't mind either way.

AI agents that field questions and solve problems reliably and organically can deliver experiences on par with, or even superior to, those of a human agent. And when customers respond to that automated opening line with "Oh, good," we'll know we've succeeded.

Customers Can Be Understandably Wary of AI Agents

Why do customers care whether an agent is AI or human? Often, it's because bad past experiences have made them wary. Earlier generations of AI chatbots produced unhelpful, formulaic responses and made frequent missteps that gave them away.

But as AI has advanced beyond basic large language models (LLMs), agents have become more human-like and less obvious. In a recent survey, fewer than half of consumers (47%) said it's very or extremely easy to tell whether they're chatting with a human customer service agent or an AI.

As customers grow less sure about who, or what, they're interacting with, their demand for transparency has grown: 89% of respondents agree that companies should disclose whether an agent is a real person or AI. In other words, customer trust and confidence remain low even as quality rises. It's on us to earn them back.

Better AI Agents Will Promote Trust and Acceptance

But the truth is, at the end of the day, most customers don't care how their problem is handled; they care how well. If AI agents prove their worth, disclosure may become not just an ethical obligation but a competitive differentiator, reassuring customers that they'll be in good (automated) hands.

In fact, nearly half of consumers surveyed (48%) feel AI agents have made customer service more helpful, compared with less than a third (31%) who've found them less helpful.

The numbers may be skewed by the channel preferences of different generations. Only 32% of Boomers feel AI agents have made customer service more helpful, perhaps because they'd rather pick up the phone than interact online.

Younger generations who’ve grown up with chat, email, and digital messaging far prefer to communicate asynchronously, and they’re much more receptive to AI agents; 56% of Millennials feel they’ve made customer experience better.

But it all comes down to performance, and that's where there's still work to be done. Long wait times are among the most common and loudly voiced complaints. A slow response will send people elsewhere, and may even prompt them to cancel their order. A CX AI agent that responds immediately would represent a major improvement over bottlenecked human call centers, yet only 17% of consumers feel wait times for customer service have shortened over the past two years.

We can't be sure what role AI has played in that, but the figure mirrors another factor in the quality of the AI experience: the low ticket resolution rate of CX solutions based on retrieval-augmented generation (RAG), a technique that supplements an LLM's generative output with information retrieved from external sources.
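For readers unfamiliar with the pattern, a minimal sketch of RAG looks like the following. Everything here is illustrative: the keyword-overlap retriever stands in for the vector-embedding search a production system would use, and the composed prompt would be sent to a real LLM rather than printed.

```python
# A toy sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve relevant passages, then prepend them to the prompt the LLM sees.
# The retriever below is a naive keyword scorer for illustration only.

def retrieve(query, documents, top_k=2):
    """Rank documents by keyword overlap with the query (toy retriever)."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_augmented_prompt(query, documents):
    """Compose the prompt an LLM would receive: retrieved context + question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

# Hypothetical knowledge-base snippets a CX deployment might index.
knowledge_base = [
    "Refunds are processed within 5 business days.",
    "Orders can be tracked from the account page.",
    "Support hours are 9am to 5pm Eastern.",
]

prompt = build_augmented_prompt("How long do refunds take?", knowledge_base)
print(prompt)
```

The key point is that the model's knowledge is bounded by what the retriever surfaces; when retrieval misses, the agent has nothing useful to generate from, which is one reason resolution rates for purely RAG-based CX agents can disappoint.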

RAG can be a useful technology for some use cases, but customer service isn't one of them, despite its widespread adoption for the purpose. Broadly speaking, RAG-based CX AI agents achieve only a 10–20% ticket resolution rate, compared with the 70–80% rate possible with an agentic AI solution.

Many consumers may feel AI agents have made customer service more helpful, but RAG-powered agents clearly aren’t doing much to speed up response times.

System architecture makes a difference as well. Another top customer complaint is having to provide the same information multiple times—name, account number, order number, and so on. This has been a common annoyance since long before the rise of AI, but there’s no excuse for it to remain one. Both human and AI systems should be designed to preserve information across handoffs so that either can pick up right where the last one left off.
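One way to design for that is a shared session store that both AI and human agents read from and write to, keyed by conversation. The sketch below assumes such a store; the class and field names are hypothetical, not any particular vendor's API.

```python
# A sketch of preserving customer details across handoffs, assuming a shared
# per-conversation context object that AI and human agents both use.
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    """Facts collected during a conversation, carried across handoffs."""
    conversation_id: str
    facts: dict = field(default_factory=dict)

    def remember(self, key, value):
        """Record a detail so no later agent has to ask for it again."""
        self.facts[key] = value

    def handoff_summary(self):
        """What the next agent (human or AI) sees instead of re-asking."""
        known = ", ".join(f"{k}={v}" for k, v in self.facts.items())
        return f"Conversation {self.conversation_id}; known: {known}"

# The AI agent collects details once...
session = SessionContext(conversation_id="c-1024")
session.remember("name", "Dana")
session.remember("order_number", "A-7731")

# ...so an escalation to a human starts with the context intact.
summary = session.handoff_summary()
print(summary)
```

However it's implemented, the design goal is the same: the customer states each fact exactly once, and every subsequent agent, human or automated, inherits it.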

Ironically, as AI improves, it can become more patient than a human agent, and can mimic empathy to the point that, even with full AI disclosure, the customer feels heard. This, too, will help the technology build trust and gain acceptance.

Escalating Out Should Be Easy—and Rare

For consumers of all ages, the top frustration is an automated customer service system that doesn't provide an option to connect to a human. Again, this complaint varies with age, ranging from 21% of Millennials and Gen Z to 52% of the Silent Generation. Why are they so focused on zeroing out of the AI agent? Often, it's because they've come to doubt its effectiveness, even before giving it a fair chance.

That doesn't mean businesses should make it harder for customers to escalate to a human. We can't force people to accept AI agents, and we'd be foolish to try. Instead, we should focus on making them less likely to want to escalate. In fact, if customers know they'll be able to reach a human if needed, they may be less frustrated to encounter an AI agent initially. And as they see their question or problem handled skillfully and efficiently, they may rarely feel the need to escalate at all.

The ethics of AI agent disclosure is a worthwhile conversation, but it should be brief. Let's accept our responsibility for transparency, reassure customers of their prerogative to escalate, and then move on to the more important work of delivering exceptional experiences that render the issue moot.
