The heads of AI companies such as Microsoft, OpenAI, Amazon, Google parent Alphabet and Anthropic are among the nearly two dozen people named by the U.S. Department of Homeland Security (DHS) to a board that will explore how critical infrastructure organizations can use AI responsibly and how those sectors can be protected against AI-related disruptions.
A broad array of interests beyond AI development is represented on the board announced by Homeland Security Secretary Alejandro Mayorkas, including tech companies (such as IBM, Cisco, Nvidia and AMD), critical infrastructure companies (Delta Air Lines, Northrop Grumman and Occidental Petroleum), government officials (Seattle Mayor Bruce Harrell, Maryland Gov. Wes Moore and Arati Prabhakar, assistant to the president for science and technology and director of the White House Office of Science and Technology Policy) and civil rights organizations (the Center for Democracy and Technology, The Leadership Conference on Civil and Human Rights and the Lawyers’ Committee for Civil Rights Under Law).
Academia is also represented, including Stanford University’s Institute for Human-Centered Artificial Intelligence.
The board’s creation came days before DHS and the Cybersecurity and Infrastructure Security Agency (CISA) released guidelines to mitigate AI-based cyberthreats to critical infrastructure, including establishing a culture of AI risk management and creating a process for assessing, analyzing and tracking AI risks.
AI ‘Transformative,’ but Risky
Mayorkas in a statement noted that while “AI is transformative technology that can advance our national interests in unprecedented ways,” it also comes with risks that can be mitigated through best practices and proactive steps. The new board will “help ensure our nation’s critical infrastructure – the vital services upon which Americans rely every day – effectively guards against the risks and realizes the enormous potential of this transformative technology.”
The Biden Administration has identified 16 critical infrastructure sectors – touching on manufacturing, the defense industrial base, chemicals, energy, financial services, health care and transportation, among others – that need to be protected against cyberattacks, including those launched by nation-states such as China, Russia and Iran.
Such protection was a key part of President Biden’s 2021 executive order to improve the nation’s cybersecurity capabilities, as well as his October 2023 executive order on the secure and responsible development and use of AI.
China, Russia Are Concerns
A key concern is that an attack on a critical infrastructure sector could affect millions of Americans and create a national security issue for the United States. There already are myriad examples of these industries being targeted by both nation-state and financially motivated threat groups: water utilities in Pennsylvania last year and, more recently, in Texas have been attacked, and sectors like health care and financial services are heavily targeted.
In February, CISA, the FBI and other agencies issued a warning that the China-backed threat group Volt Typhoon had compromised the networks of critical infrastructure organizations and remained hidden for as long as five years, essentially lying in wait to disrupt U.S. operations should a conflict between the United States and China arise.
The rapid rise of generative AI and large language models (LLMs) only heightens such threats, a factor behind the creation this month of DHS’ Artificial Intelligence Safety and Security Board.
Part of Biden’s EO
Mayorkas created the board as part of his responsibility under President Biden’s secure AI executive order, with the goal of developing recommendations both to help critical infrastructure organizations use AI technologies more responsibly and to prepare for and prevent AI-related disruptions that could ripple throughout the country, according to DHS. In its 2024 Homeland Threat Assessment, the department noted that AI technologies “have the potential to enable larger scale, faster, efficient and more evasive cyber attacks – against targets, including pipelines, railways and other US critical infrastructure” – and that adversaries like China and Russia are developing “other AI technologies that could undermine U.S. cyber defenses, including generative AI programs that support malicious activity such as malware attacks.”
DHS last month also published its first AI roadmap, which outlined how the department plans to use AI responsibly in its work while protecting individuals’ privacy and civil rights and promoting AI safety and security around the country.
Critical Infrastructure’s Role in AI
In prepared comments, technology company executives spoke about the benefits of AI, with AMD CEO Lisa Su saying that “the widespread use of AI has the potential to improve every aspect of our daily lives. It is critical that we work across the public and private sectors to adopt a collaborative and responsible approach that will ensure we harness the incredible power of AI for good.”
“AI technology is capable of offering immense benefits to society if deployed responsibly, which is why we’ve advocated for efforts to test the safety of frontier AI systems to mitigate potential risks,” said Dario Amodei, co-founder and CEO of Anthropic. “Safe AI deployment is paramount to securing infrastructure that powers American society, and we believe the formation of this board is a positive step forward in strengthening U.S. national security.”
Other board members talked about the role of critical infrastructure in ensuring the benefits of AI are widely felt. Humane Intelligence CEO Rumman Chowdhury said that “grappling with the implications of artificial intelligence on critical infrastructure is necessary to ensure equitable and tangible benefits of this technology to all Americans.”
For her part, Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, said that “critical infrastructure plays a key role ensuring everyone has equal access to information, goods and services. It also poses great threats, including the spread of bias and hate speech online, stoking fear, distrust, and hate in our communities of color.”