A group of top vendors in the rapidly expanding AI space is joining with government agencies, industry experts and researchers in a consortium aimed at addressing the security and safety risks that come with the technology.

The new AI Safety Initiative, being overseen by the Cloud Security Alliance (CSA) and unveiled last week, includes Microsoft, Google, Amazon, OpenAI and Anthropic, as well as the Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA).

In all, more than 1,500 experts are part of the group, which is tasked with developing guidelines, templates and tools for deploying AI in a safe, ethical and responsible manner and in accordance with industry standards and anticipated government regulations, according to the CSA.

It’s part of a larger push within industry and government to bring work on AI security and risk in line with the rapid innovation of the technologies, particularly generative AI, in the wake of OpenAI’s release of its ChatGPT chatbot last year.

Generative AI a Focus

The CSA’s initiative initially will focus on the development and deployment of generative AI.

“Generative AI is reshaping our world, offering immense promise but also immense risks,” Caleb Sima, chair of the AI Safety Initiative, said in a statement. “Uniting to share knowledge and best practices is crucial.”

Project leaders already have created four working groups covering AI technology and risk, governance and compliance, controls, and organizational responsibilities, and plan to discuss the initiative’s progress at the CSA Virtual AI Summit next month and at the CSA AI Summit at the RSA Conference in May in San Francisco.

“AI will be the most transformative technology of our lifetimes, bringing with it both tremendous promise and significant peril,” CISA Director Jen Easterly said in a statement. “Through collaborative partnerships like this, we can collectively reduce the risk of these technologies being misused by taking the steps necessary to educate and instill best practices when managing the full lifecycle of AI capabilities, ensuring – most importantly – that they are designed, developed, and deployed to be safe and secure.”

Other AI Security Efforts

CISA and other government agencies have been among the leaders pushing for safe development and use of AI. CISA in August said that AI technology – like other tech solutions – needs to be built with security and safety in mind, part of the agency’s larger “secure by design” emphasis.

The same month, the Defense Advanced Research Projects Agency (DARPA) unveiled the AI Cyber Challenge to encourage cybersecurity and AI specialists to design ways to automatically detect and fix software vulnerabilities and protect critical infrastructure.

Seven high-profile companies – including Google, Microsoft, OpenAI, and Meta – in September agreed to work with the White House to address risks posed by AI. In addition, Google, Microsoft, OpenAI, and Anthropic in July announced the Frontier Model Forum, an industry group looking at ways to ensure the safe and responsible development of frontier AI models, described by OpenAI as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.”

The NSA in September unveiled the AI Security Center to oversee the development and integration of AI capabilities in the country’s national security systems, including creating best practices, evaluation tools, and risk frameworks to ensure the safe adoption of AI technologies in national security sectors and the defense industrial base.

In October, President Biden issued an executive order to begin establishing guidelines that ensure the security and safety of AI and protect workers and consumers from negative effects of the technology. The executive order touches a range of federal agencies, from the Commerce and State departments to the National Institute of Standards and Technology (NIST), the Federal Trade Commission, and the National Science Foundation.

On CSA’s Radar

The CSA began putting a stronger focus on AI earlier this year after the explosive adoption of ChatGPT.

“Generative AI is a powerful technology full of possibilities,” Jim Reavis, co-founder and CEO of the alliance, wrote in April. “Let’s work very hard on developing a body of best practices to govern the usage of AI today and let the developers of AI use those experiences to improve the next generation. Maybe raising a good AI child will create a responsible AI adult.”

Marina Bregkou, senior research analyst at CSA, wrote in August that “AI technologies rely on vast amounts of data to train models and make decisions. That data itself, however, poses security risks. If sensitive information is mishandled or falls into the wrong hands, it could lead to privacy breaches, identity theft, and financial fraud, to name just a few risks. Such vulnerabilities of AI models highlight the importance of developing sturdy and secure AI systems that can withstand such manipulations.”
