
Liability attached to wrong or misleading answers from AI chatbots is a growing concern in the wake of press investigations and a Canadian tribunal ruling. Meanwhile, overall trust in AI shows signs of sharp deterioration, according to a new study.
AI’s tendency to hallucinate has generated some well-publicized examples of mishandled customer support. Recent high-profile incidents include one in the UK, where an international delivery service’s customer-support chatbot described its own company as “the worst delivery service in the world.”
The concern is that such AI mishaps may become more than embarrassing incidents. Because the tendency to hallucinate appears to be baked into the technology rather than a bug that can be eradicated, AI’s miscues can create liability for organizations that deploy chatbots. For example, a Washington Post investigation into the AI chatbot assistants used by H&R Block and TurboTax found that the chatbots often delivered inaccurate and misleading answers to tax questions.
A legal case in Canada puts the issue in sharper relief. In Moffatt v. Air Canada, the plaintiff, relying on information provided by the airline’s chatbot, applied retroactively for a reduced bereavement fare. Air Canada rejected the request, saying its chatbot had provided inaccurate information. In its defense, Air Canada lamely argued that the AI chatbot was a separate legal entity responsible for its own actions.
The British Columbia Civil Resolution Tribunal rejected Air Canada’s argument, ruling the airline liable for negligent misrepresentation. The tribunal noted that “while a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
“The AI-powered chatbot is unlikely to deliver on the promise of fully replacing the human element of customer service,” says veteran technology journalist Pete Pachal who authors an AI-focused Substack newsletter called The Media Copilot. “Raw outputs will never be 100 percent perfect, and the liability that can be attached to a single wrong answer is potentially devastating, regardless of how many disclaimer notices you slap on top of it.”
Meanwhile, trust in AI appears to be eroding, exacerbated by incidents such as the debut of Google’s Gemini AI, which generated historically inaccurate images that sometimes replaced White people with depictions of Black, Native American, and Asian people.
That lack of trust is evident in governments’ reluctance to adopt AI-powered chatbots. According to a Gartner survey, fewer than 25% of government organizations will have citizen-facing services powered by AI technologies by 2027. The worry is that AI chatbots will wind up talking trash to the general public. “Risk and uncertainty are slowing GenAI’s adoption at scale, especially the lack of traditional controls to mitigate drift and hallucinations,” Gartner analyst Dean Lacheca told The Register.
More broadly, a 2024 Edelman survey of consumers indicates that trust in AI companies is eroding quickly, falling from 43% to 35% in the U.S. over the past year. In developed countries, 43% of respondents reject the growing use of AI while just 21% embrace it. The top barriers to AI adoption are privacy concerns and potential harm to people and society. Respondents also worry that AI may not have undergone sufficient testing and that it may “devalue what it means to be human.”
The Edelman survey also suggests that the fast pace of AI development has created an understanding gap: respondents indicated they would be more enthusiastic about AI if they could see its benefits both for themselves and for society.