The United States, UK, and France have pledged that only humans – and not AI systems – would decide whether to deploy nuclear weapons.
Now the United States is pushing global rivals China and Russia to make the same promise.
Paul Dean, the State Department’s arms control official, told journalists during a briefing Thursday that the United States and its two allies had made a “clear and strong commitment” to ensure that humans have complete control over their countries’ nuclear arsenals and that “we would welcome a similar statement by China and the Russian Federation,” according to Reuters.
“We think it is an extremely important norm of responsible behavior and we think it is something that would be very welcome in a P5 context,” said Dean, principal deputy assistant secretary in the Bureau of Arms Control, Deterrence and Stability.
The P5 are the five permanent members of the United Nations Security Council – the United States, UK, France, Russia and China.
Whether China will make the commitment is unclear. The United States and China discussed nuclear weapons in November 2023, but according to a report this week, China rejected a U.S. invitation to continue arms talks.
U.S.-China Talks
Dean’s remarks come a week after Secretary of State Anthony Blinken traveled to China to meet with President Xi Jinping and other Chinese government officials about a broad array of issues, including advanced forms of AI and the risks and safety issues that come with them. Blinken in State Department readouts and interviews didn’t give many details about the discussion, but did say that in the coming weeks, the two countries will hold their first talks about AI.
The meetings came amid ongoing global tensions between the two countries, from the wars in Ukraine and the Middle East to competitive economic practices to cybersecurity issues. On the technology front, Blinken and Chinese officials also talked about the United States banning the sale of U.S. advanced technologies, such as the latest processors, to Chinese tech companies for fear that they could be used later against the United States.
“One of the things that we’re very focused on is making sure that when it comes to the highest end technology, we’re not in the business of providing or selling things that could be turned against us to undermine our own security,” Blinken told NPR in an interview. “What we’ve done, as we’ve said, is to work to build a very high fence around a very small yard, because this is not about decoupling our economies, it’s not about cutting off trade and investment with China – things that are beneficial to us and beneficial to them as long as it’s done fairly. … When it comes to sensitive technology, we’re going to be very, very sure that the most sensitive technology does not wind up where it could turn around and hurt us.”
AI and Nuclear Weapons
The issue of AI’s role with nuclear weapons has been a concern for years but appears to have picked up steam with the explosive innovation and adoption around generative AI over the past year. In April 2023, U.S. Senator Edward Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA), and Ken Buck (R-CO) introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act.
In its 2022 Nuclear Posture Review, the Defense Department (DoD) said its policy is to ensure that a human is kept “in the loop” for any actions that involve the use of nuclear weapons. The Congressional act would essentially codify the DoD’s policy, according to the lawmakers.
“AI technology is developing at an extremely rapid pace,” Lieu said in a statement. “While we all try to grapple with the pace at which AI is accelerating, the future of AI and its role in society remains unclear. … [The act] will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon – not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”
Last month, Beyer, a member of the House AI Task Force, said at the group’s first meeting that the first action it should take is passing the bill.
Escalation Worries
Such a bill is important in light of a report in January by researchers from Northeastern University, Georgia Institute of Technology, Stanford University, and the Hoover Wargaming and Crisis Simulation Initiative, who ran international conflict simulations using large language models from OpenAI (GPT-4, GPT-3.5, GPT-4-Base), Meta (Llama-2-Chat) and Anthropic (Claude 2.0).
What they found was that some models tended to escalate quickly into using nuclear weapons.
“We show that having LLM-based agents making decisions autonomously in high-stakes contexts, such as military and foreign-policy settings, can cause the agents to take escalatory actions,” the researchers wrote. “Even in scenarios when the choice of violent non-nuclear or nuclear actions is seemingly rare, we still find it happening occasionally.”
Not everyone is on board with the push to take AI out of military decision-making processes. In a column in NATO Review, Steffan Puwal, a special lecturer in physics and adjunct assistant professor of medical physics at Oakland University in Rochester, Michigan, wrote that the rush to eliminate AI from nuclear defenses comes from a misunderstanding of technology.
“AI systems offer an opportunity to strengthen nuclear deterrence by providing a more accurate and capable defensive nuclear response,” Puwal wrote. “The purpose of making nuclear weapons more accurate and capable is not to promote their usage. Such capabilities, instead, provide a more credible deterrence to nuclear war and are consistent with classic nuclear doctrine. AI is simply a strategic tool, like nuclear weapons themselves.”
The use of AI should “serve a strategic objective,” he wrote, adding that “concern over AI should not preclude the use of AI in strengthening nuclear deterrence. Nor should AI be deployed in those systems simply for the sake of deployment.”