
Will the rise of a better-than-human artificial general intelligence (AGI) precipitate an extinction-level event for humanity? That fear is growing in several quarters as the expected timeline for AGI's arrival shrinks to the near term, amid concerns that the companies racing to develop the technology aren't putting proper safeguards in place.

The latest portents of potential doom come from a group of 13 current and former employees of OpenAI and Google DeepMind. The June 4th open letter penned by this group has the support of AI luminaries like Geoffrey Hinton, known as the "godfather of AI." Several have ties to a movement called "effective altruism," which seeks to mitigate the potential worst impacts of AI. These include not just a catastrophic event but also issues like job displacement, product misuse and manipulation of the public through highly realistic "deepfakes." The biggest risk, the group argues, is the loss of control of autonomous AI systems.

AI Warning

The open letter argues for a right to warn about artificial intelligence by providing avenues and safeguards for whistleblowers inside AI companies. Whistleblower protection is needed, the group says, because "AI companies have strong financial incentives to avoid effective oversight and we do not believe bespoke structures of corporate governance are sufficient to change this."

More ominously, “AI companies possess substantial non-public information about the capabilities and limitations of their systems,” data these companies are not likely to share voluntarily. The group says “whistleblowers should not be retaliated against if confidential information is given to a company’s board, regulators or an appropriate independent organization.”

AGI Safety Measures

AI companies may resist disclosures of what they see as intellectual property. For their part, companies like OpenAI say they have safeguards in place. OpenAI, however, recently disbanded its year-old "superalignment" team, whose responsibilities included AGI safety measures, a widely reported move that may have prompted the open letter.


The "Right to Warn" group includes prominent former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright and Daniel Ziegler, along with six anonymous current employees, an anonymity that perhaps speaks to the risk they perceive. Other signatories are Ramana Kumar, formerly of Google DeepMind, and Neel Nanda, who currently works there and is a former Anthropic employee.

Some researchers, like Daniel Kokotajlo, believe AGI may arrive as early as 2027 given the speed of the technology's development, a sharp contraction from past predictions that it was still decades away. Kokotajlo puts the chances that an AGI could severely harm or destroy humanity at 70%.

AI Systems Are Capable of Deception

AGI worries are perhaps fed by studies like one at MIT, which last month found that AI systems are already capable of lies and deception despite being instructed to be truthful, honest and helpful. MIT researchers specifically cited the performance of an AI called Cicero, developed by Meta, in the board game Diplomacy. The MIT assessment contradicts Meta's analysis of Cicero's gameplay, which it described as "largely honest and helpful." MIT's researchers found that upon closer examination, "Cicero engages in premeditated deception, breaks the deals to which it agreed and tells outright falsehoods."

On a more cosmic scale, AGI may be the answer to a puzzling paradox that has baffled astronomers. Evidence suggests conditions in the universe would encourage the development of intelligent life, but as yet no extraterrestrial "technosignatures" have been detected. Professor Michael A. Garrett, who chairs the astronomy and astrophysics theory group at the University of Manchester and is director of the Jodrell Bank Centre for Astrophysics in England, proposes that this "Great Silence" is caused by an artificial superintelligence (ASI) that snuffed out advanced "nuisance" biological civilizations before they could develop mitigating strategies like a diversified multi-planet existence. Garrett, whose research paper was published this month in Acta Astronautica, suggests the longevity of technical civilizations is less than 200 years, underscoring the need to regulate AI. "Failure to do so could rob the universe of all conscious presence," writes Garrett.

The roots of the current debate might be dated to 2014, when two competing visions arose. On one side was the esteemed scientist Stephen Hawking, who warned that "the development of full artificial intelligence could spell the end of the human race." That same year Mark Zuckerberg, who now leads the push for the adoption of Meta AI, championed his famous "move fast and break things" mantra that the tech industry seems to swear by. The Hawking camp, by contrast, appears to be inducing warning fatigue in its audience.

Meanwhile, there are signs of what many might see as an alternative apocalypse. On the morning of June 4th, the AI platforms ChatGPT, Gemini, Perplexity and Claude simultaneously experienced outages lasting several hours. The cause has not been satisfactorily explained.
