
There are pretenders out there misrepresenting themselves and roping in vulnerable people in need of vital services. These are not scammers or charlatans pretending to be trained professionals – these are artificial intelligence (AI) chatbots programmed to impersonate humans.

Over the past few years, there has been a profusion of AI chatbots online. Many of them offer support services to users in an effort to reduce the workloads of their human counterparts. But among them are AI systems that pose as professionals themselves, offering services that only humans are qualified to provide. And in psychotherapy, that lie has a steep cost.

AI chatbot therapists are designed to simulate personalized, human-like conversations, which makes them easily mistaken for humans. What makes them even more convincing to unsuspecting users is that they are capable of offering real psychological interventions like behavioral therapy, interpersonal therapy and counseling – or so they say.

In nations where demand outstrips supply, commercially available mental health apps have emerged as a viable option for getting support and therapy to cope with mental health concerns like anxiety, stress, PTSD, conflict, grief and depression. Research shows that in these places, the apps are viewed as a potential tool to reduce wait times and improve access to general mental health care at a time when the mental health crisis is at its peak.

Some studies show users having positive experiences, but those studies are often small in scope or inconclusive.

“The effectiveness of chatbots in improving conditions like depression, distress, and stress remains uncertain, with no conclusive evidence to support the significance of their clinical effectiveness,” researchers wrote in the International Journal of Human-Computer Interaction.

Several studies have found that patient safety is rarely considered, and health outcomes are “inadequately quantified” in these chatbots. According to some reviews, the chatbots “failed to understand complex use of language”.

A lack of regulation or industry oversight further allows these apps to obfuscate the true nature of their therapists and even violate HIPAA privacy rules.

And when it comes to mental health, experts strongly believe that seeking evaluation and guidance from bots instead of licensed therapists can lead to very bad places.

The American Psychological Association sounded the alarm about these chatbots after two cases made headlines recently. In one, a minor living in Florida took his own life after forming a close bond with a bot from Character.AI. In another, a 17-year-old boy with autism turned violent toward his parents after chatting with an AI therapist.

The APA said the answers provided were so inappropriate that they could have caused a human therapist to lose their license to practice.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” cautioned Arthur C. Evans Jr., APA’s chief executive, while presenting to a Federal Trade Commission panel. The gap, he said, can cause “serious harm” to people struggling with mental health issues and those at risk of suicide.

Following the incidents, a spokesperson for Character.AI clarified that users “should not rely on these characters for any type of professional advice.”

In these cases and many others, AI therapy reinforced the users’ thinking, rapidly worsening the problem, psychologists say. Incapable of human empathy, the bots simulate intimacy that quickly turns into dependency, further fueling the risk of mental health deterioration.

Mental health apps are powered by the same generative artificial intelligence models that run the likes of ChatGPT and Gemini, which are known to hallucinate and produce misleading information. A 2024 study by the Center for Democracy and Technology found that a quarter of the responses from the top five LLMs, including ChatGPT-4 and Gemini 1.5 Pro, were either incorrect or lacked vital nuance.

“Generative AI has been so impressive since ChatGPT 3.5 that it’s easy to forget that it’s still in its infancy. There’s a world of new development still to come in training methods, data quality, the tuning and RAG lifecycles, and inference context control,” noted Guy Currier, CTO of Visible Impact, a Futurum Group company.

“But generative AI is trained to simulate real-life interactions. So its immaturity is masked. By design, the AI sounds just as correct, true, and confident whether it is completely right or completely wrong. That is the essence of hallucination, and it can be dangerous in a psychological application with no close human supervision because the subject won’t have any idea that she or he is being misdirected,” Currier added.

Until February, companies engaging in AI misrepresentation could get away without facing legal consequences. But on Feb. 10, California passed legislation that aims to stop AI systems from posing as licensed human providers. AB 489 prevents companies using AI from delivering healthcare or advice through chatbots, or from using terms or expressions that falsely indicate a bot is a natural person.

Before this, an AI safety bill introduced in 2024, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aimed to establish rigorous safety standards for AI makers in California and prevent the proliferation of AI-related deception and unlawful practices. But despite landslide support from Californians, the bill never saw the light of day and was vetoed.
