Orca Security has launched an open source environment that gives organizations hands-on experience defending artificial intelligence (AI) application environments.

Managed by the Orca Research Pod team, the first instance of the environment, dubbed AI Goat, is based on the Amazon SageMaker platform from Amazon Web Services (AWS) and can be found in the Orca Research GitHub repository.

It provides an intentionally vulnerable AI environment built in Terraform that includes numerous threats and vulnerabilities for testing and learning purposes, modeled on the top 10 machine learning security risks identified by the Open Worldwide Application Security Project (OWASP). Over time, Orca Security plans to add support for additional AI platforms.

The environment is designed not just for cybersecurity professionals but also for anyone involved in AI projects who needs to master security best practices, says Shir Sadon, a security researcher at Orca Security. “It’s really for AI engineers, in general,” she says.

For example, software engineers involved in AI projects can learn how to avoid misconfigurations that cybercriminals might later exploit. AI Goat also shows how secrets can be inadvertently exposed in ways that are easy for attackers to discover.
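
As a rough illustration of that secrets-handling lesson, the Python sketch below (not code from AI Goat; the secret name is a placeholder) contrasts a credential hardcoded in source with one fetched at runtime from AWS Secrets Manager via boto3.

import boto3

# Anti-pattern: a credential embedded in source code ends up in Git history,
# container images and notebook checkpoints, where it is easy to discover.
# API_KEY = "sk-live-EXAMPLE-DO-NOT-DO-THIS"

def get_api_key(secret_id: str = "ml-pipeline/api-key") -> str:
    """Fetch the credential at runtime from AWS Secrets Manager.

    The secret name is a placeholder; any secrets store works, as long as
    the value never lands in the repository or the notebook itself.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]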

Many of the tactics and techniques cybercriminals use to compromise AI environments are not especially new. However, many of the IT professionals now working on these projects have limited cybersecurity experience, and the AI models being deployed in production environments are high-value targets.

The most immediate concern is phishing attacks through which the credentials of anyone involved in training an AI model might be compromised. Those individuals span everyone from the data scientists who build the model to the contractors hired to reinforce its training. Such attacks make it possible, for example, to poison an AI model by exposing it to inaccurate data, increasing its tendency to hallucinate and produce output that might be anywhere from slightly off to completely absurd.
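
To make the poisoning scenario concrete, here is a generic Python sketch (not Orca's code) of how an attacker with write access to a labeled training set could silently flip a small fraction of labels, a change that is hard to catch in spot checks but can measurably skew a model.

import numpy as np

def flip_labels(labels: np.ndarray, fraction: float = 0.05, seed: int = 0) -> np.ndarray:
    """Return a copy of binary (0/1) labels with a small fraction flipped.

    Illustrates why write access to training data must be protected:
    a 5% flip is hard to spot by eye but can measurably degrade a model.
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned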

The opportunity to poison an AI model occurs not only before it is deployed but also afterward, when, for example, an organization starts using vector databases to extend the capabilities of the large language model (LLM) at the core of a generative AI platform by exposing it to additional data. Access must be secured whether an LLM is being extended, customized or built from the ground up.
AI teams also need to ensure that no sensitive data is inadvertently used at some future date to train an LLM without explicit permission.
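
As one hedged illustration of that point, the Python sketch below screens documents for obviously sensitive patterns before they are embedded and written to a vector database. The index_fn callback is a stand-in for whatever write call a real vector store client exposes, and a production pipeline would pair this with access controls on the retrieval path.

import re
from typing import Callable, Iterable

# Rough patterns for obviously sensitive strings; a real pipeline would use a
# proper data-classification step rather than a handful of regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # U.S. Social Security number format
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),       # AWS access key ID format
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email addresses
]

def safe_to_index(text: str) -> bool:
    """Return False when a document contains obviously sensitive content."""
    return not any(pattern.search(text) for pattern in SENSITIVE_PATTERNS)

def index_documents(docs: Iterable[str], index_fn: Callable[[str], None]) -> None:
    """Screen documents before handing them to the vector store.

    index_fn stands in for the vector database client's write call
    (hypothetical here); blocked documents should go to review instead.
    """
    for doc in docs:
        if safe_to_index(doc):
            index_fn(doc)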

Additionally, it is important to remember that AI models are, at their core, another type of software artifact, subject to the same vulnerability issues that plague other applications. Malware that cybercriminals inject into software repositories can quickly find its way into an AI model. The issue is that once a compromise is discovered, rebuilding an AI model is several orders of magnitude more expensive than patching an application.
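
One basic control that follows from treating models as software artifacts is verifying a downloaded model file against a known SHA-256 digest before loading it. The Python sketch below uses a placeholder path and digest.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_digest: str) -> None:
    """Refuse to load a model artifact whose digest does not match the
    value recorded when the artifact was produced (placeholder value below)."""
    if sha256_of(path) != expected_digest:
        raise RuntimeError(f"Model artifact {path} failed integrity check")

# Example with placeholder values:
# verify_model(Path("models/classifier-v3.pt"), "e3b0c44298fc1c14...")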

Finally, the AI model itself is one of the most important assets any organization might have. Cybercriminals are not only looking to gain access to these models; in many cases they will want to steal a copy of the entire AI model.
The challenge, of course, is that all it takes for cybercriminals to achieve any of these goals, and wreak untold havoc, is a single weak link.
