
In September, OpenAI unveiled a new AI model billed as being able to think and reason like a human. A month later, Microsoft rolled out updates to Copilot making the chatbot more “fun” and “familiar”.

These launches are part of a succession of campaigns across the year to make artificial intelligence models seem sentient and dynamic, like human beings.

“I find the interaction is always friendly, and when I try to generate some code, it does provide what looks to be compilable, almost executable, code – but it doesn’t often do what I really want it to do,” says Ray Lucchesi, an expert in data storage.

Lucchesi echoes the frustration of many users who find chatbot technology inadequately helpful.

In the past few months, tech companies have put a great deal of effort into humanizing their built-in virtual assistants. Infusing human behaviors into the models makes them sound human – and hence, familiar.

When bots mimic the behaviors and emotions we recognize, we tend to think of them less as machines, and more as our friends.

Widely used chatbots like ChatGPT, Gemini and Copilot are fairly good at modeling human behavior, thanks to under-the-hood tuning and pruning. But does that make them more valuable? We asked the Tech Field Day delegates at the AI Field Day event.

Turns out, “human-washing,” as experts call it, can build connection and trust between humans and bots. Some AI chatbots designed to be more than co-bots or assistants exploit this to manipulate users into becoming attached to them. Friend bots and love bots are clear examples: these apps have increasingly led users to abandon social relationships for pseudo-connections, with empty promises of relieving their loneliness.

Experts also believe that the wide use of AI can threaten human critical thinking and autonomy.

But to many, the excessively sociable behavior of chatbots is vexing, and it is putting off the very people that it is designed to serve.

“It can be rather frustrating because it’s not doing what needs to be done versus what it thinks needs to be done,” Lucchesi commented.

“The chatbot responses are couched in additional language – ‘Yes, I can do that for you’. I’d really like it to say, ‘OK, I have your data. You asked for something, here it is’, just like a Google search, and not pretend like it’s human. It’s not!” said Jack Poller, an industry analyst covering AI.

Chatbot personalities are developed using popular personality frameworks like the Myers-Briggs model and the Five-Factor Model (FFM). These frameworks describe a handful of dimensions, or characteristic traits. By combining variations of these traits and varying their degrees of expression, a model can be given many distinct personalities.

So when a user logs in, they can set their assistant to be polite and professional, fun and quirky, sarcastic, or whatever flavor of personality works for them.
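As a rough illustration of the idea, trait-based personas can be reduced to a set of “dials” that get compiled into a system prompt. The sketch below is hypothetical – the trait names are borrowed from the Five-Factor Model, but the function, scales, and prompt wording are illustrative and not any vendor's actual API.

```python
# Hypothetical sketch: assembling a chatbot persona from trait "dials",
# loosely inspired by the Five-Factor Model. Illustrative only.

FFM_TRAITS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]

def build_persona_prompt(traits: dict) -> str:
    """Turn 0.0-1.0 trait settings into a system-prompt fragment."""
    lines = ["You are an assistant with the following personality:"]
    for trait in FFM_TRAITS:
        level = traits.get(trait, 0.5)  # unspecified traits default to neutral
        label = "high" if level > 0.66 else "low" if level < 0.33 else "moderate"
        lines.append(f"- {label} {trait} (weight {level:.2f})")
    return "\n".join(lines)

# A "fun and quirky" preset: high openness and extraversion.
print(build_persona_prompt({"openness": 0.9, "extraversion": 0.8}))
```

Varying the weights yields a different prompt fragment, and hence a differently flavored assistant, without retraining the underlying model.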

Using this practice, a class of chatbots is aiming to become “conversational companions” and “digital friends”.

“We’re living in a world of individual productivity with GenAI by adding it to every app and every tool that we use, when in fact they’re going to be marginally helpful except in a few cases where you use them a lot,” says Mitch Ashley, VP of DevOps & App Development at The Futurum Group.

Many of these chatbots will never enter the mainstream market, Ashley says. “They will be chatbots that we’ve thrown to the wayside because we just don’t have time to figure out how to make it work.”

Some Unintended Consequences

But beyond the utility conversation, there are bigger concerns. Behind the outwardly friendly behavior of chatbots, there’s an emergent practice – unauthorized collection of data.

“What annoys me the most is that it is used as cover for all of the abusive behavior like surveilling everybody all the time,” remarks Justin Warren, founder of PivotNine, a boutique consulting firm in Melbourne, Australia.

“It’s like we’ve made life slightly better for a few people in this way, which we could have done any other time if we’d put in the effort, but we couldn’t be bothered, but we have now. So you need to forgive us for all of the other evil stuff that we’re doing.”

Poller weighed in. “It’s a little disconcerting. There’s a lot of legends that companies are listening to your conversations and it’s also not far from the truth.”

The biggest AI companies have faced investigations for unlawfully harvesting personal information through chatbots and for offering users insufficient transparency about their terms.

Information derived through passive means gives advertisers the deterministic data essential for targeted marketing. Voice assistants, for example, passively eavesdrop on their users to provide better responses; streaming channels take screenshots to curate recommendations for viewers.

“Theoretically, a lot of these decisions are being made for us not through democratic means, and despite how much the tech industry likes to say they want to democratize stuff, we don’t get to vote,” Warren says.

Worse, AI chatbots are inescapable, embedded in almost every technology, which makes the privacy conversation feel like a lost cause.

“As of 2024, you cannot buy a smartphone that does not have an always-on AI Assistant,” Stephen Foskett, president of Tech Field Day, points out.

“Just adding “smart” to everything has increased the availability of data and the opportunity for companies to take advantage of it in ways that are not appropriate. We’re already traveling into a surveillance society, and AI pushes us further down that road,” he says.

Just short of a billion people use AI chatbots today. The global chatbot market was valued at approximately $15.57 billion in 2024 and is expected to reach $46.64 billion by 2030.

For more, check out the full AI Field Day Delegate Roundtable – “AI Is Not Your Friend” – at Techfieldday.com.
