
The United States and UK are heading up an effort to encourage organizations around the world to make security a priority when designing, developing and deploying AI systems.

The top security agencies from the two countries were joined by counterparts in 16 other nations in signing an international agreement over the weekend to ensure that AI systems are “secure by design” and that risks – from AI models leaking data to systems being exploited by threat actors – are managed and minimized.

The 20-page document, Guidelines for Secure AI System Development, gives developers of AI systems and software guidance on building in security throughout the development lifecycle as the pace of innovation around the technology accelerates.

It’s particularly aimed at providers of AI systems that use models hosted by an organization or that rely on external APIs, with the UK’s National Cyber Security Centre (NCSC) urging developers, data scientists, managers, decision makers and others to read the guidelines.

“AI systems have the potential to bring many benefits to society,” the NCSC wrote. “However, for the opportunities of AI to be fully realized, it must be developed, deployed and operated in a secure and responsible way. AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats.”

The agency added that “when the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”

AI Is Not Only About ‘Cool Features’

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), told Reuters, adding that the document is “an agreement that the most important thing that needs to be done at the design phase is security.”

CISA joined the NCSC in presenting the guidelines, which were also signed by the U.S. National Security Agency and FBI, as well as by agencies from Australia, New Zealand, Canada, South Korea, Nigeria, Israel, Chile, Japan and a number of European countries, including Italy, France, Germany, Norway, Estonia, Poland and the Czech Republic.

The United States and other countries have talked for years about the need to instill security into the AI development lifecycle, but the urgency has ramped up over the past year with the explosion in popularity of generative AI, sparked by OpenAI’s release of ChatGPT in November 2022.

Adoption of generative AI tools has skyrocketed, along with concerns that such AI models could inadvertently leak sensitive or corporate data or that bad actors could exploit vulnerabilities in AI software to take over systems or steal information.

A Focus on Security in Design

The Biden Administration, as part of its larger focus on cybersecurity in federal agencies and the private sector, has urged that security be a priority in the development of AI. CISA has been a strong advocate of the secure-by-design principle for all software development and in recent months has said the principle also applies to AI products.

“Discussions of artificial intelligence (AI) often swirl with mysticism regarding how an AI system functions,” Christine Lai, AI security lead, and Jonathan Spring, senior technical advisor, wrote in a blog post in August. “The reality is far more simple: AI is a type of software system. And like any software system, AI must be Secure by Design.”

CISA has also created a roadmap for AI development and use.

The secure-by-design guidelines break down considerations and mitigations into four areas: design, development, deployment, and operation and maintenance. They cover everything from the need for threat modeling and supply-chain security to incident-management processes, privacy protection, eliminating bias and fighting disinformation.

“Following ‘secure by design’ principles requires significant resources throughout a system’s life cycle,” the authors of the guidelines wrote. “It means developers must invest in prioritizing features, mechanisms, and implementation of tools that protect customers at each layer of the system design, and across all stages of the development life cycle. Doing this will prevent costly redesigns later, as well as safeguarding customers and their data in the near term.”

Taking Responsibility for Security

The guidelines should be used in conjunction with established cybersecurity and risk-management practices. The secure-by-design principles urge organizations and developers – rather than users – to take responsibility for security, to be transparent and accountable, and to build a development structure in which security is a business priority.

Toby Lewis, global head of threat analysis at cybersecurity AI provider Darktrace, said the new guidelines provide a “welcome blueprint” for safe and trustworthy AI.

“I’m glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task,” Lewis told Techstrong.ai in an email. “Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”
