
Humans, it turns out, increasingly like large language models (LLMs) at work and home, warts and all. It’s the trust part that keeps getting in the way.
In the latest status update on the love-hate relationship between people and AI, recent polling of Americans paints a picture of conflicting attitudes toward a technology that could displace them at work yet makes their lives easier in healthcare, transportation, banking, and entertainment.
Half of American adults now use LLMs such as OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot, according to a new survey from Elon University’s Imagining the Digital Future Center.
“By any measure, the adoption and use of LLMs is astounding. I am especially struck by the ways these tools are being woven into people’s social lives,” Lee Rainie, director of the center, said in an email. “Contrary to the picture that many have about how LLMs are used, our survey shows that the share of those who use the models for personal purposes significantly outnumbers those who use them for work-related activities.”
“These findings start to establish a baseline for the way humans and AI systems will evolve together in the coming years, particularly as we draw nearer and nearer to artificial general intelligence,” Rainie said. “These tools are increasingly being integrated into daily life in sometimes quite intimate ways at the level of emotion and impact. It’s clearly shaping up as the story of another chapter in human history.”
The national poll of 500 people, conducted in late January, illustrates how human-like the interactions between users and their tools have become, with results both pleasant and disturbing.
Two-thirds of LLM users said they often have spoken conversations with the models, and half believe the models are smarter than they are. While users report a general rapport with the models, which can even flash a sense of humor, the Elon study found an underlying tension.
A quarter of respondents said their model appears to make moral judgments about them, and many reported that using it takes a toll on their self-esteem. When using LLMs, they said they felt lazy (50%), as if they were cheating (35%), frustrated or confused (35%), too dependent on the technology (33%), guilty about bad mistakes or decisions (23%), and even manipulated (21%) by their favorite model.
The results of the Elon study mirror a recent Pew Research paper that found U.S. workers are more worried than hopeful about future AI use in the office.
Voters, too, remain spooked by the national risks posed by AI: three-fourths (76%) believe foreign actors will use AI to harm the U.S., according to a poll from Americans for Responsible Innovation (ARI).
“Advanced AI is a source of strength for the U.S., but in the wrong hands, it can also be one of our biggest vulnerabilities,” Doug Calidas, senior vice president of government affairs at ARI, said in an email. “Across the board, we need to update national security policy for the AI era.”