Some people still can’t shake their dystopian view of artificial intelligence (AI) as something straight out of a science-fiction movie.
Existential concerns about AI were underscored recently by test results that revealed instances of “scheming” by OpenAI’s o1, its most advanced AI model with “complex reasoning” capabilities. But concerns run deeper, from apps and lawsuits to cautionary futurists.
Research continues to pour in on the dangers of building relationships with chatbots as millions of people spend hours a day bonding with digital companions, according to a chilling Washington Post report last week. While AI friendships often start with the promise of curing loneliness, the report found, some have ended in suicide.
Another Post dispatch, meanwhile, described a Texas lawsuit against Character.ai, claiming its chatbots urged a son to kill his family.
Former Google CEO Eric Schmidt has seen the future of AI, and it is pitch black. He fears the fast-moving technology could pose a serious threat to humanity within five to 10 years.
Schmidt envisions a horrifying parallel between AI’s breakneck development and the creation of the atomic bomb during World War II. “After Nagasaki and Hiroshima (in Japan), it took 18 years to establish treaties banning nuclear tests,” he said at a summit hosted by Axios in Washington, D.C., last year. “We don’t have that kind of time today.”
Schmidt’s dim view of AI, a technology that is a cornerstone of Alphabet Inc.’s business strategy, rests on a premise shared by several deep thinkers: that AI systems could, over time, make independent decisions and potentially access weapons, mismanage nuclear facilities, or defy humans.
Schmidt believes an international regulatory body akin to the Intergovernmental Panel on Climate Change is necessary to guide policymakers in navigating the risks and opportunities presented by AI.
Silicon Valley venture capitalist David Sacks, a PayPal alum named AI and cryptocurrency czar by President-elect Donald Trump, is expected to take a hands-off approach to AI regulation. Conversely, President Biden’s administration in October issued an executive memo on AI safety and security.
Academy Award-winning director James Cameron shares Schmidt’s concerns. Cameron sees parallels between AI’s advancement and the machine-ruled future depicted in “The Terminator,” released 40 years ago.
“I warned you in 1984, and you didn’t listen,” Cameron said last year on CTV News.
“I think the weaponization of AI is the biggest danger,” he said. “I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate. You could imagine an AI in a combat theatre, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate.”
Though Schmidt and Cameron warn of AI as a potential threat, others believe the current state of the technology lacks the sophistication to pose an immediate danger. “The debate about existential risk is very premature,” AI researcher Yann LeCun, a director at Facebook parent Meta Platforms Inc., recently told the Financial Times.