An analysis of cloud computing platforms running artificial intelligence (AI) workloads finds that 62% of organizations have deployed an AI package with at least one known vulnerability.
On the plus side, most of the vulnerabilities present are low to medium risk, with an average Common Vulnerability Scoring System (CVSS) score of 6.9, and only 0.2% of them have a public exploit.
Nevertheless, as more AI workloads are deployed, many cybersecurity issues are being overlooked, says Orca Security CEO Gil Geron.
For example, 45% of Amazon SageMaker buckets use easily discoverable, non-randomized default bucket names, and 98% of organizations using the service have not disabled default root access.
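Those SageMaker findings are straightforward to check in an organization's own account. The following sketch, a minimal illustration using boto3 rather than anything taken from the Orca report, flags notebook instances that still allow root access and S3 buckets that follow the predictable sagemaker-&lt;region&gt;-&lt;account-id&gt; default naming pattern; the region used is an assumption for the example.

```python
import boto3

REGION = "us-east-1"  # assumed region for illustration
account_id = boto3.client("sts").get_caller_identity()["Account"]

# 1. Notebook instances that still allow root access (enabled by default).
sagemaker = boto3.client("sagemaker", region_name=REGION)
for nb in sagemaker.list_notebook_instances()["NotebookInstances"]:
    detail = sagemaker.describe_notebook_instance(
        NotebookInstanceName=nb["NotebookInstanceName"]
    )
    if detail.get("RootAccess") == "Enabled":
        print(f"Root access enabled: {nb['NotebookInstanceName']}")

# 2. Buckets that use the easily guessable default SageMaker name.
default_prefix = f"sagemaker-{REGION}-{account_id}"
for bucket in boto3.client("s3").list_buckets()["Buckets"]:
    if bucket["Name"].startswith(default_prefix):
        print(f"Default-named SageMaker bucket: {bucket['Name']}")
```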
Similarly, 98% of organizations using the Google Vertex AI service have not enabled encryption at rest.
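Vertex AI does encrypt data with Google-managed keys by default, so a minimal remediation sketch here assumes the concern is customer-managed encryption keys (CMEK) not being configured. The example below uses the google-cloud-aiplatform SDK with placeholder project, key ring, and bucket names to pin newly created resources to a key the organization controls.

```python
from google.cloud import aiplatform

# Assumed project, region, and Cloud KMS key path; all values are placeholders.
aiplatform.init(
    project="example-project",
    location="us-central1",
    encryption_spec_key_name=(
        "projects/example-project/locations/us-central1/"
        "keyRings/example-ring/cryptoKeys/example-key"
    ),
)

# Resources created after init() inherit the key, e.g. a managed dataset
# holding training data.
dataset = aiplatform.TabularDataset.create(
    display_name="customer-training-data",
    gcs_source="gs://example-bucket/training.csv",  # placeholder path
)
```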
Overall, the analysis finds 56% of the organizations Orca Security analyzed have adopted their own AI models to build custom applications, in addition to building integrations specific to their environments. Azure OpenAI is currently the most widely used service, while Scikit-learn is the most commonly deployed AI package (43%). GPT-3.5 is the most popular AI model (79%), according to the report.
The challenge is that many of the data science teams configuring and deploying AI models lack cybersecurity expertise, notes Geron. They often don’t realize that vulnerabilities in the tools used to create AI models carry over into the models built with those tools, he adds.
In addition, much of the sensitive data used both to initially train and to later customize those AI models isn’t encrypted, notes Geron.
To help address that issue, Orca Security has launched an open source environment that gives organizations a means to gain hands-on experience defending AI application environments. Dubbed AI Goat, the first instance of that service is based on the Amazon SageMaker platform. It provides access to an intentionally vulnerable AI environment, built in Terraform, that includes numerous threats and vulnerabilities for testing and learning purposes, modeled on the top 10 machine learning security risks identified by the Open Worldwide Application Security Project (OWASP).
For example, software engineers involved in AI projects can learn how to avoid misconfigurations that cybercriminals might later exploit. AI Goat also shows how secrets can be inadvertently exposed in ways that are easy to discover.
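As an illustration of that second point, the snippet below contrasts the kind of hard-coded credential that is trivially discoverable in a shared repository with one fetched at runtime from a secrets manager. It is a generic sketch, not taken from AI Goat, and the secret name and key value are placeholders.

```python
import boto3

# Risky pattern: a credential committed alongside the model code.
OPENAI_API_KEY = "sk-EXAMPLE-DO-NOT-COMMIT"  # easy to find in any repo scan

# Safer pattern: fetch the secret at runtime from a secrets manager.
def get_api_key(secret_id: str = "example/openai-api-key") -> str:
    """Return an API key stored in AWS Secrets Manager (secret name is a placeholder)."""
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretString"]
```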
Unfortunately, AI applications are only the latest innovation where cybersecurity has once again become an afterthought, notes Geron. In many ways, the same cybersecurity issues that arose when workloads were first deployed in the cloud are resurfacing in the age of AI, he adds. The challenge, as always, is that AI in the cloud depends on a shared responsibility model for cybersecurity that many data science teams have yet to fully appreciate, says Geron. “There’s going to be a lot of training needed,” he adds.
Of course, none of that will slow down the pace at which AI workloads are being developed and deployed, but it all but guarantees that it’s only a matter of time before the number of cybersecurity incidents involving AI models starts to substantially increase.