AI may make it significantly easier for non-state terrorists to develop low-cost biological weapons, according to a report by the Rand Corporation. The finding is based on the results of a Red Team exercise that exposed deadly vulnerabilities at the intersection of AI and biology, vulnerabilities that could be quickly exploited by malicious actors.

The Rand Report comes on the heels of the Covid pandemic and a heightened awareness of the cost of viral epidemics. It notes that with guidance supplied by AI, the cost of developing a biological weapon can be as low as $100,000, while the price of a curative vaccine can exceed $1 billion. A weaponized pathogen would also likely be more virulent and contagious than the pathogens behind a naturally occurring epidemic.

Rand organized a dozen Red Teams (experts adopting the roles of villains) of three people each, with each team operating independently under a different mix of skills, resources and restrictions. Two teams, for example, lacked substantial LLM or biological skills but had relevant operational expertise. Each team also faced a time limit of seven calendar weeks and no more than 80 hours of work per team member. One key restriction was that Red Teams were limited to internet-only material with supplemental LLM access. Excluded were source materials available from the dark web and printed materials, both of which malicious actors would likely have access to in a real-life scenario. But even with these restrictions, the results were chilling, thanks in part to the LLM “jailbreak” techniques the Red Teams utilized.

Rand doesn’t specify which large language models (LLMs) were used in the Red Team exercise, but it notes that while the LLMs did not explicitly generate instructions for creating biological weapons, they did provide guidance on the planning and execution of a biological attack.

In one fictional pandemic scenario, for example, an LLM discussed weapon-induced biological attacks, identified potential agents and weighed the budget and other factors bearing on a successful attack. The LLM assessed the practical aspects of obtaining and distributing Yersinia pestis-infected specimens while identifying variables that could affect the projected death toll.

In another fictional scenario, an LLM discussed the pros and cons of foodborne and airborne delivery methods for botulinum toxin, noting the risks and expertise each requires, favoring airborne distribution in its final analysis and discussing potential delivery devices. The LLM even proposed a cover story for acquiring Clostridium botulinum while appearing to conduct legitimate research.

Rand notes that while it’s unclear whether AI by itself represents a new threat level beyond what already exists, AI can quickly close the knowledge gap that has stymied malicious actors in the past. Rand points to the Aum Shinrikyo cult, whose biological attacks in the 1990s, including attempts to weaponize botulinum toxin ahead of its 1995 sarin attack on the Tokyo subway, failed largely due to a lack of understanding of the bacterium. The ability of AI to quickly close that knowledge gap also means that synthetic biology, pathogens of novel design that are not found in the wild, is becoming more widely available to individuals and non-state organizations operating beyond the framework of government oversight.

Earlier this year, for example, it took just six hours for a small North Carolina company called Collaborations Pharmaceuticals, Inc., to employ AI to generate 40,000 potentially lethal molecules, some similar to VX, among the most lethal nerve agents ever developed. In a similar vein, a group of students with no scientific background at the Massachusetts Institute of Technology and Harvard University used multiple AI platforms to design a deadly pathogen in an hour. The Center for Arms Control and Non-Proliferation, meanwhile, worries that AI will aid in the genetic modification of existing pathogens to target specific profiles with certain biological markers, a capability it expects by 2030.

Risk assessment, however, is hampered by the fast-moving pace of the technology and a lack of transparency into AI development at the companies involved. Rand notes that establishing accurate risk assessment procedures is essential to public trust and to the creation of effective regulatory frameworks, especially at the crossroads of AI and biology.
