
Google recently announced the release of a conceptual framework, the Secure AI Framework (SAIF), comprising six core elements designed to help secure AI technology collaboratively. The tech giant argues that a framework spanning the public and private sectors is essential to ensure responsible actors safeguard the technology supporting AI advancements, so that AI models are secure by default when they are implemented.
Its new framework concept is an important step in that direction, the tech giant claimed. SAIF is designed to help mitigate risks specific to AI systems, such as model theft, poisoning of training data, malicious inputs through prompt injection, and the extraction of confidential information from training data.
“As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical,” shared Google in a blog post announcing SAIF.
The announcement follows Microsoft’s recently released blueprint for AI governance, “Governing AI: A Blueprint for the Future”, which is based on five guidelines governments should consider when developing policies, laws and regulations for AI.
Sounil Yu, chief information security officer at JupiterOne, says that as the original creator of the transformer architecture, Google is well positioned to speak on the concerns associated with the safe use of AI technologies.
“The SAIF is a great start, anchoring on several tenets that are found in the NIST Cybersecurity Framework and ISO 27001,” he says. “What is needed next is the bridge between our current security controls and those that are needed specifically for AI systems.”
He points out that many of the challenges presented in the SAIF follow patterns similar to threats against traditional systems.
“To ensure rapid adoption of the SAIF, we will need to find ways to adapt existing tools and processes to fit the emerging needs instead of having to implement something entirely new,” Yu says. “The primary difference with AI systems that makes the SAIF particularly compelling and necessary is that with AI systems, we won’t have many opportunities to make mistakes.”
Joe Murphy, technology evangelist for DeepBrain AI, points out that Microsoft’s announcement was much more comprehensive than Google’s framework.
“Google’s SAIF, with its focus on cybersecurity and potential bad actors, is important, but it is really just one small piece of the larger guidelines that Microsoft proposed,” he explains.
He says SAIF “nicely lives” between Microsoft’s third guideline, which focuses on the development of a broader legal and regulatory framework, and its fourth, which promotes transparency.
“Even though everyone is trying to move forward with ‘eyes wide open’, there is a strong sense that we don’t know what we don’t know,” Murphy adds. “Both Microsoft and Google are leveraging existing best practices and lessons learned.”
However, he notes there will be new, unanticipated lessons and unintended consequences, pointing out that Microsoft broaches this topic with its second point, “Require safety brakes…”.
“The risk of catastrophic unintended consequences is somewhat nullified by the feeling that we have already opened Pandora’s Box, and turning back is no longer an option,” he says. “Some people compare it to the early development of nuclear weapons in WWII.”
Piyush Pandey, CEO at Pathlock, notes that the risks SAIF hopes to mitigate have strong similarities to those that IT, audit and security teams face when protecting business applications: data extraction, malicious inputs and sensitive access, to name a few.
“Just as Sarbanes-Oxley legislation created a need for separation-of-duties controls for financial processes, it’s evident that similar types of controls are necessary for AI systems,” he says.
Sarbanes-Oxley (SOX) requirements were quickly applied to the business applications executing those processes, and as a result, controls testing is now its own industry, with software solutions and audit and consulting firms helping customers prove the efficacy and compliance of their controls.
“For SAIF to become relevant, and utilized, controls will need to be defined to give organizations a starting point to help them better secure their AI systems and processes,” Pandey says.
For business leaders looking to use SAIF as a springboard to initiate their AI governance program, Pandey says they should heavily lean on their IT, audit and security teams for best practices and ways to define and enforce access controls.
Yu says AI safety is an extremely important principle to consider at the earliest stages of designing and developing AI systems because of the potential for catastrophic and irreversible outcomes.
“As AI systems grow more competent, they may perform actions not aligned with human values,” he says. “Incorporating safety principles early on can help ensure that AI systems are better aligned with human values and prevent potential misuse of these technologies.”
From his perspective, having a robust safety framework with corresponding measures from the start can make AI systems more trustworthy and dependable.
Murphy points out that humans can develop frighteningly powerful technologies, but once the technology race (nuclear or AI) has started, it is important to stay with the lead pack.
When it comes to initiatives for secure AI development, Murphy says tech heavyweights like Google and Microsoft working together is a good start.
“Companies and governments around the world need to work together, much in the same way we regulate nuclear development with the International Atomic Energy Agency,” he says. “To quote Oppenheimer, ‘Our work has changed the conditions in which men live, but the use made of these changes is the problem of governments, not of scientists.’”