
This fight club is no gritty underground spectacle cloaked in a code of silence. Lockheed Martin’s AI Fight Club is high-tech war-gaming, with the goal of accelerating the development and operational deployment of artificial intelligence in the nation’s military capabilities.
The defense technology company is seeking teams from across industry and academia to bring their A-game and compete in a virtual arena designed to mimic the chaos of battle across land, sea, air and space. The AI Fight Club will use advanced simulation and visualization to evaluate AI in realistic, complex, modern warfare scenarios.
“There has never been a more important time to prove which AI technologies are the best, to help the United States stay ahead of the threats facing our nation and allies,” said John Clark, Lockheed Martin’s senior vice president of technology and strategic innovation. “Teams with the best AI will battle each other to determine the most robust, reliable and effective models.”
The goal, according to Lockheed Martin, is to bring together teams and government experts “to collaborate in a unique environment that fosters innovation, identifies vulnerabilities, and informs the development of more advanced AI systems to benefit American national security as a whole.”
The competition is expected to begin in the fourth quarter of 2025. Before then, teams will train their models using Lockheed’s proprietary synthetic environment, which simulates complex battle scenarios. They’ll receive a participant guide detailing the challenge scenario, scoring, evaluation criteria and interface requirements. Each challenge will focus on a specific military operation or mission area. In each bracketed matchup, two teams will plug their systems into the simulation, where their AI agents will operate in real-time environments reflecting modern military challenges. Scenarios include coastal defense against amphibious assaults, airborne operations in contested airspace, space-based surveillance and reconnaissance missions, and layered homeland defense.
The capabilities of the AI agents will be evaluated in each scenario, and teams will earn points based on evaluation metrics such as speed, accuracy, resilience and adaptability.
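Lockheed Martin has not published its scoring formula, but the basic shape of the evaluation is straightforward to picture. The sketch below is a hypothetical illustration only: it assumes each metric is normalized to a 0-to-1 scale and combined with made-up weights, and the field names, weights and the head-to-head comparison are all assumptions rather than the competition's actual interface.

```python
# Hypothetical illustration only: Lockheed Martin has not published its scoring
# formula. This sketch shows one way a bracketed matchup could be scored by
# weighting per-scenario metrics such as speed, accuracy, resilience and
# adaptability. All names and weights below are assumptions.
from dataclasses import dataclass


@dataclass
class ScenarioResult:
    speed: float         # normalized decision speed, 0.0-1.0 (higher is better)
    accuracy: float      # fraction of correct identifications/engagements, 0.0-1.0
    resilience: float    # performance retained under degraded or contested conditions, 0.0-1.0
    adaptability: float  # performance on scenario variations not seen in training, 0.0-1.0


# Assumed weights; a real evaluation would define these per mission area.
WEIGHTS = {"speed": 0.2, "accuracy": 0.4, "resilience": 0.2, "adaptability": 0.2}


def score(result: ScenarioResult) -> float:
    """Weighted sum of the evaluation metrics for a single scenario."""
    return (WEIGHTS["speed"] * result.speed
            + WEIGHTS["accuracy"] * result.accuracy
            + WEIGHTS["resilience"] * result.resilience
            + WEIGHTS["adaptability"] * result.adaptability)


def matchup_winner(team_a: ScenarioResult, team_b: ScenarioResult) -> str:
    """Compare two teams' aggregate scores in a head-to-head bracketed matchup."""
    a, b = score(team_a), score(team_b)
    if a == b:
        return "tie"
    return "Team A" if a > b else "Team B"


if __name__ == "__main__":
    a = ScenarioResult(speed=0.9, accuracy=0.80, resilience=0.70, adaptability=0.60)
    b = ScenarioResult(speed=0.7, accuracy=0.85, resilience=0.80, adaptability=0.75)
    print(f"Team A: {score(a):.3f}, Team B: {score(b):.3f}, winner: {matchup_winner(a, b)}")
```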
While AI has reshaped industries from finance to health care, its militarization faces intense scrutiny, centered chiefly on whether humans can remain at the top of the decision-making process when so much of that process is automated.
Each match will be monitored by government observers and technical judges, who will evaluate the AI's behavior not just for success but for safety, robustness and clarity of decision-making. And because the scenarios run in a controlled simulation, organizers can watch, frame by frame, how the AI makes its calls.
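That kind of frame-by-frame review depends on every agent decision being captured as it happens. The sketch below is not Lockheed's interface; it is a minimal, assumed example of how a match could write an append-only decision log that judges replay and audit after the fact. All field names and the file format are assumptions.

```python
# Hypothetical sketch, not Lockheed Martin's interface: one way observers could
# capture each AI agent's decisions frame by frame in a controlled simulation so
# a match can be replayed and audited afterward. All field names are assumptions.
import json
import time
from dataclasses import dataclass, asdict, field


@dataclass
class DecisionRecord:
    frame: int            # simulation frame number
    agent_id: str         # which AI agent made the call
    observation: dict     # state the agent saw at this frame
    action: str           # action the agent chose
    rationale: str        # agent-supplied explanation, if the model exposes one
    timestamp: float = field(default_factory=time.time)


class MatchLog:
    """Append-only log of agent decisions, written as JSON lines for later review."""

    def __init__(self, path: str):
        self._file = open(path, "a", encoding="utf-8")

    def record(self, rec: DecisionRecord) -> None:
        self._file.write(json.dumps(asdict(rec)) + "\n")
        self._file.flush()

    def close(self) -> None:
        self._file.close()


# Example usage: log a single decision during a coastal-defense scenario.
log = MatchLog("match_001.jsonl")
log.record(DecisionRecord(
    frame=482,
    agent_id="team_a_agent",
    observation={"contacts": 3, "threat_level": "high"},
    action="intercept_nearest_contact",
    rationale="highest closing speed among detected contacts",
))
log.close()
```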
“AI isn’t magic,” the company states on its website. “It doesn’t solve all our problems, and we have plenty of challenges ahead of us in transitioning AI capabilities to our end users. But, when problems are beyond human scale to comprehend based on volume of data, complexity or urgency, AI can help illuminate solution opportunities.”
The Washington, D.C.-based Center for Strategic and International Studies (CSIS), a “bipartisan, nonprofit policy research organization dedicated to advancing practical ideas to address the world’s greatest challenges,” noted in a commentary that war-gaming can provide insight into the outcome of scenarios but should not be considered flawless. The article was authored by Benjamin Jensen, a senior fellow in the Futures Lab at CSIS; Yasir Atalan, an associate data fellow in the lab; and Dan Tadross, head of federal delivery at Scale AI.
“Whether classified or unclassified, wargames are a form of synthetic data,” they wrote. “They are based on scenarios, which even when backed by extensive research and sensitive intelligence, are still approximations of reality. Games cannot predict the future—but then again, neither can most analysts. What games can do is highlight tradeoffs and provide a forum for analyzing decision-making.”
In 2023, the Department of Defense created Task Force Lima to develop, evaluate, recommend and monitor generative AI capabilities—particularly large language models (LLMs)—across the department. A 2024 national security memorandum directs the national security community to “utilize advanced AI technologies, with proper safeguards in place, to meet national security goals,” and highlights the critical role of LLMs in maintaining the U.S. advantage during a “transformational shift in the AI landscape.”
Among its summary of recommendations, Task Force Lima advised the DOD to embrace the type of experimentation and studies currently underway by organizations such as Lockheed Martin: “Work with industry and academia to improve and appropriately harness GenAI. GenAI techniques are still rapidly being developed in the commercial sector and with the help of academia. This development is already beginning to address some of the known shortfalls of GenAI, which impede its broad application, such as hallucinations. DoD’s strategy should be to leverage commercial solutions and talent in this rapidly changing field and focus any research and development efforts on instances where commercial solutions are insufficient for military tasks.”
Lockheed Martin, Northrop Grumman, MIT, Purdue University, and several other companies and academic institutions have worked with the Defense Advanced Research Projects Agency, or DARPA, to develop wargaming platforms. When DARPA launched the “Gamebreaker” project on May 5, 2020, Erin Cherry, formerly a program manager for Northrop Grumman, said, “Wargaming with game technology and AI, not to be confused with traditional tabletop exercises, really allows us to rapidly explore strategies and optimize for tactics selection, fleet composition and mission outcomes. This is incredibly important if our military shifts to a large number of smaller platforms. That would change the way that we fight, and this type of wargaming will help us explore strategies, compositions and orders of battle.”
Ms. Cherry is now the AI/Autonomy Senior Advisor and Project Manager to the Office of the Under Secretary of Defense for Research and Engineering.