Synopsis: In this Techstrong.ai video interview, Mike Vizard talks with Pattern Labs CEO Dan Lahav and Sella Nevo, director of the Meselson Center at RAND Corp., about what's needed to ensure the weights that underpin artificial intelligence (AI) models have not been tampered with by malicious actors.

In this Techstrong.ai interview, Mike Vizard speaks with Dan Lahav, CEO of Pattern Labs, and Sella Nevo, director of the Meselson Center at RAND Corp., about a new report highlighting vulnerabilities in AI models. The discussion centers on the risks to AI model weights, the critical parameters that define a model's functionality and value. The report finds that these weights are not as secure as one might assume, posing a threat not just from individual hackers but also from nation-states that could leverage stolen models for strategic purposes. Lahav and Nevo emphasize the need for robust cybersecurity measures, noting that 80% to 90% of the relevant controls are well-known yet under-implemented in AI contexts.

The conversation also touches on the broader implications of securing AI assets. Both experts warn that, without proper protection, proprietary AI models could end up for sale on the dark web, potentially enabling misuse such as the development of biological weapons. The report recommends more than 150 distinct security measures and highlights the need for a collaborative approach between data scientists and security experts to safeguard these valuable assets. Lahav and Nevo also suggest that, given the sophistication of potential attackers, government support may be necessary to protect these strategic resources effectively.