
As concerns over the destructive potential of artificial intelligence (AI) mount, the Center for AI Safety (CAIS), led by director Dan Hendrycks, published a one-sentence statement signed by dozens of technology leaders warning that AI poses an extinction-level risk to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

The signatories include Sam Altman, head of OpenAI, creator of the popular generative AI application ChatGPT; Demis Hassabis, CEO of Google DeepMind; and dozens of scientists and academics working in the AI field.

The release of the statement follows similar warnings from other tech leaders, including Tesla CEO Elon Musk, who called for a moratorium on the development of AI technology until sufficient guardrails were established.

“While the statement might seem melodramatic or histrionic at first glance, if one parses the sentence carefully, it’s reasonable: The signatories are not equating AI itself with pandemics and nuclear war; they are talking specifically about the ‘risk of extinction from AI’,” says Aaron Kalb, chief strategy officer and co-founder of Alation.

He points out that nuclear physics and bioengineering have hugely helped humanity but could also result in various doomsday scenarios, which is why regulatory bodies and international treaties have been established.

“It’s not crazy to hope that we take similar steps to harness the benefits of this new technology and mitigate the risks,” he adds.

Kalb says one could argue that while people can die directly from disease or explosions, bits and bytes are harmless on their own.

“However, a quick glance at history, and the present, shows the indirect devastation that can emerge from words and plans of the sort AI can generate, even before AI systems are given access to power grids, water supplies, and so on,” he notes.

From the perspective of Avivah Litan, Gartner Distinguished VP analyst specializing in AI, the statement comes off as “odd and unprecedented.”

“Here you have tech entrepreneurs and scientists telling the public that the technology many of them are selling can destroy the human race if left unchecked,” she says. “Yet they continue to work on it because of competitive pressures.”

She adds it’s almost as if they know they are drowning and want to tell the world they are about to drown together, while secretly hoping a white knight in shining armor, the lifeguard (in this case, government officials and regulators), will come in and save them all in time.

“My personal view is that this threat is real and ominous, and we knew it was coming for many years now. We just didn’t know it was coming so quickly,” Litan says. “Once AI becomes smarter than humans, all bets are off for programming AI with safeguards that protect the human race.”

In the meantime, she says the best way to mitigate the threat is to establish an international regulatory body to license and regulate vendors that produce AI foundation models.

At the outset, this new regulatory body should set a timeframe by which AI model vendors must ensure their GenAI models have been trained to incorporate pre-agreed goals and directives that align with human values.

“They should make sure AI continues to serve humans instead of the other way around,” she notes. “This will be technically difficult to achieve but is imperative going forward.”

Debasish Biswas, chief technology officer at Aware, says he fully agrees with the call to action in the open letter and believes AI requires careful consideration as its impact on society grows.

“If you see the list of signatories on this open letter, one thing that stands out is how global the list is,” he says. “This underscores the concern that AI is not confined to specific geographies, cultures or political beliefs – this is a global issue.”

From his perspective, it is critical for governments to realize that they need to invest in ethical AI development so the AI research community at large can build platforms to automate the detection of fake content.

“As an analogy, a platform may take a similar path to how mRNA technology was used to fight a broad range of COVID variants,” he explains.

Erik Gaston, vice president of global executive engagement at Tanium, points out that AI on its own is neither good nor evil but, like most things, can be used for both good and evil.

“The reality is that the technology is accessible and available to all, and I do not think it is going away,” he says. “With that, we do need to restrict and regulate how it is used and by whom to ensure that bad actors stand out and are visible. I can see AI as part of both the problem and potentially the cure.”

He says the statement highlights the potential risks of AI misuse at a level its publishers believe could lead to the end of society as we know it.

“It definitely puts society on its heels and raises concerns and fear of AI,” he adds. “My concern is that leading with fear rarely leads to a positive outcome.”

Gaston notes that, as with every evolution of technology such as the internet and the cloud, everything starts with early adopters and little structure, governance, or standards.

“It seems that AI is going to have to go through this same process if it is going to be used properly and in the name of helping people achieve higher,” he says. “We must control the proliferation and ensure that proper use of this technology outpaces the bad actors.”
