
MIT researchers have assessed the AI risk, and they have at least 3,000 reasons to be worried.

That is the number of real-world instances in which AI systems “caused or nearly caused harm,” according to an AI Incident Database compiled by the university’s AI Risk Repository.

“The risks posed by Artificial Intelligence (AI) are of considerable concern to a wide range of stakeholders including policymakers, experts, AI companies, and the public,” researchers concluded in a 79-page report.

Drawing on 43 existing taxonomies, the paper classifies each AI risk along a causal taxonomy of entity (human, AI, other), intent (intentional, unintentional, other) and timing (pre-deployment, post-deployment, other). What it discovered was the broad, deleterious influence of AI on discrimination, privacy, misinformation, cybercrime, socioeconomic and environmental affairs, and other areas.

For example, researchers found most risks (51%) were caused by AI systems rather than humans (34%), and they emerged after the AI model had been trained and deployed (65%) rather than before (10%). A similar proportion of risks were presented as intentional (35%) and unintentional (37%).
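To make that classification scheme concrete, here is a minimal, hypothetical Python sketch (not taken from the report) of how an incident could be tagged along the repository’s three causal dimensions and how proportions like those above could be tallied; the field names and sample records are illustrative assumptions.

```python
# Hypothetical sketch: tagging AI risks along the causal taxonomy's three
# dimensions (entity, intent, timing) and tallying per-category proportions.
# Field names and sample records are illustrative, not from the report.
from collections import Counter
from dataclasses import dataclass


@dataclass
class RiskEntry:
    entity: str   # "human", "ai", or "other"
    intent: str   # "intentional", "unintentional", or "other"
    timing: str   # "pre-deployment", "post-deployment", or "other"


# Toy sample of classified risks (made-up data, for illustration only)
risks = [
    RiskEntry("ai", "unintentional", "post-deployment"),
    RiskEntry("human", "intentional", "post-deployment"),
    RiskEntry("ai", "intentional", "pre-deployment"),
]


def proportions(entries, dimension):
    """Share of entries falling into each category of one causal dimension."""
    counts = Counter(getattr(e, dimension) for e in entries)
    total = len(entries)
    return {category: count / total for category, count in counts.items()}


for dim in ("entity", "intent", "timing"):
    print(dim, proportions(risks, dim))
```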


“Because there is no canonical set of risks, organizations developing AI are more likely to present risk-mitigation plans that fail to address a comprehensive set of risks (cf. Anthropic, 2023; Google DeepMind, 2024) or lack detail (Anderson-Samways et al., 2024),” researchers said in the report.

Researchers called out advances in AI that have made possible “dual-use technologies” like voice cloning, deep fakes, content generation and data-gathering tools that are cheaper, more efficient and easier to use. “With modest hardware requirements, these technologies are now within the reach of a broader group of users, including those with malicious intent,” they wrote. “Disinformation is already a serious issue and involves the deliberate propagation of false or misleading information.”

They added: “AI may be used to gain a political or strategic advantage or to cause harm at scale through cyber operations or the development and use of weapons.”

The report comes as California lawmakers consider a bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, that would prevent large AI models from being used to cause “critical harms” against humanity. It applies to AI models that cost at least $100 million to train and use more than 10^26 floating-point operations (FLOPS) during training.

“It’s still the super early days,” Scott Clark, chief executive of Distributional, a startup that tests AI for reliability, said in an interview. “It’s like when the automobile was first invented, and there were no seat belts or driving laws on speeding. This is a natural progression, and you need to find a middle ground.”
