
FBI Director Christopher Wray called it “the defining threat of our generation.” Volt Typhoon, identified by national security experts as a China-sponsored group, focused on pre-positioning themselves within U.S. critical infrastructure IT networks to launch cyberattacks in the event of a major crisis or conflict with the U.S.

A few years earlier, a digital intruder raised sodium hydroxide to dangerous levels in the Oldsmar, Fla., water supply in an attempt to poison the local population; the tainted water would have reached residents within 24 to 36 hours. The plot was foiled when a water treatment plant employee noticed the rising levels and reported the problem.

Water systems, utilities, the electrical grid, financial systems, railways and transportation — they’re all vulnerable to an asymmetrical attack from a nation-state that can’t compete with overwhelming U.S. armed forces. “Critical infrastructure is increasingly becoming a target,” Ayal Yogev, co-founder and CEO at Anjuna Security, said in an interview. “It has become a battlefield between the U.S. and nation-states. Essentially, it is asymmetrical warfare vs. a large and mighty conventional American military.”

In recent years, foreign adversaries have increasingly leveraged generative AI to engage in digital warfare. Operatives from Iran (motivated by ransomware and financial gain), Russia (information warfare), and China (pre-emptive exploits) pose “extremely high” risk to critical infrastructure, warns RunSafe Security CEO Joe Saunders.

“We are dealing with incredibly capable AI threats in countries not bound by regulation or guardrails” like China, NetRise CEO Thomas Pace said in an interview. “We need to concentrate on the best technology and innovation, period.”

The evolving threat prompted the Department of Homeland Security (DHS) this month to implore 16 U.S. critical infrastructure sectors to adopt practices aimed at ensuring the safe, secure use of AI. The framework applies to all levels of the supply chain, including cloud and compute firms, AI developers and consumers.

Titled “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” the digital blueprint is a detailed follow-up to an AI executive memorandum issued by the Biden administration in October. And, some experts assert, it provides an urgently needed, productive structure for mitigating risk, offering clear, actionable guidance to stakeholders across the AI supply chain.

Though highly lauded, the framework’s fate is still at the mercy of the incoming Trump administration, whose AI policy remains opaque. Elements of the plan, however, have already been baked into government policy and are likely to survive a change in presidential leadership, security experts said.

“AI has enormous potential to enhance the reliability and efficiency of everything from energy grids and water systems to food supply chains, even as our population grows, and new threats emerge,” Joel Meyer, president of public sector at Domino Data Lab, said in an email. “But as we integrate AI into these essential services, we must confront reality: Unreliable AI can lead to catastrophic failures and create openings for adversarial exploitation.”

AI: A Friend or Foe?

The role of AI for those in critical infrastructure is both promising and vexing. The industry overwhelmingly agrees AI is a tool that can be extremely effective when created and used with standards and ethics. At the same time, adversaries are weaponizing the technology — as Iran did in leading attacks on the water sector, as China attempted with its Volt Typhoon campaign, and as Russia has presumably done in the war in Ukraine.

“AI: Friend or foe? It can be an incredible tool for good, and it can be an incredible tool for bad,” Kev Breen, senior director of cyber threat research at Immersive Labs, said in an interview. “If you don’t understand its principles properly, you can inflict great damage. It can be fully weaponized or invaluable as a defense mechanism. And, remember, attackers are not bound by regulation, resulting in ‘first-mover advantage.’”

“To me, AI is just like metal. It is something we figured out can be used to build great things as well as bad things,” Chris Grove, director of cybersecurity strategy at Nozomi Networks, said in an interview. “For example, data can be used properly in-house, with safeguards, or it can be loaded into a tool like WormGPT to create sophisticated phishing emails.”

“Every day there is something happening with infrastructure somewhere because of the turmoil in the world,” Grove said.

To a large degree, the integration of AI into critical infrastructure should be viewed as a partnership, combining the strengths of AI and human intelligence to create more resilient, efficient and safe systems, according to several experts.

“Involving humans in the AI decision-making process also ensures accountability,” Katie Arrington, vice president of government affairs at Exiger, said in an email. “When systems fail or make erroneous decisions, it is crucial to have a human responsible for understanding the implications and correcting the course.”

Former CIA and corporate cyber expert Ross Young, who is now chief information security officer in residence at Team8, believes GenAI offers some fundamental security solutions: Enhancing detection of fraudulent emails through advanced contextual analysis; automating incident response workflows beyond the capabilities of existing Security Orchestration, Automation and Response tools; and delivering tailored security guidance to developers for addressing complex IT challenges.

“GenAI requires attention on the operational technology side” through best practices such as mapping of data operations and self-assessing AI vulnerabilities, Ian Bramson, vice president of global industrial cybersecurity at consultancy Black & Veatch, said in an interview.

Absent Law, Academia and Private Enterprise Offer Solutions

Legislation is a trickier proposition. With no comprehensive U.S. law on the books, Europe is showing the way, as it typically has on tech regulation.

The EU’s Digital Operational Resilience Act (DORA), which goes into effect in early 2025, “represents a forward-thinking approach to ensuring enterprises are prepared for the challenges of an increasingly digital and volatile geopolitical landscape,” Steven Dickens, CEO and principal analyst at HyperFRAME Research, said in an email. “Its well-crafted framework compels organizations to reevaluate their resilience strategies and adapt to a more interconnected yet fragmented world.”

With U.S. legislation unlikely to arrive any time soon, private companies and academia are researching new ways to protect critical infrastructure.

Earlier this month, Project Zero, one of Google’s security research initiatives, managed to detect a zero-day memory safety vulnerability using LLM-assisted detection. “We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software,” the team wrote in a post.

Meanwhile, a team at Georgia Tech’s School of Electrical and Computer Engineering has developed an algorithm, called PLCHound, that enhances critical infrastructure security by more accurately identifying devices vulnerable to remote cyberattacks. Using the algorithm’s data, backed by natural language processing and machine learning, the team was able to contact airports, hospitals and government offices and inform them of trouble spots. Within a month, a follow-up scan showed that 34% of the unsecured IP addresses identified worldwide were no longer exposing critical infrastructure devices.

Of course, as with most technologies, it often comes down to an arms race, say cybersecurity experts who cut their teeth in the federal government where they have seen various threats over the years.

“It’s a symmetric advantage,” said Stairwell CEO Mike Wiacek, who previously worked at the NSA and the Department of Defense before landing at Google, where he created one of the first corporate threat-intelligence groups. “Both sides will get better because of it and maintain the relative effectiveness of each. Giving defenders an enduring asymmetric advantage is what’s going to be needed to have a material benefit.”
