
The Open Worldwide Application Security Project (OWASP) Foundation has expanded a project for securing applications built using large language models (LLMs). In addition to identifying evolving threats, the project will now include a wide range of best-practice guidance.

The overall goal is to give data scientists, data engineers, application developers and cybersecurity specialists a better common understanding of the cyberthreats that artificial intelligence (AI) applications are already facing, says Scott Clinton, co-project lead for the OWASP Top 10 for LLM Application Security Project.

The OWASP Top 10 for LLM Application Security Project already has 550 contributors from 110 companies and continues to expand. OWASP plans to update its Top 10 list of LLM risks and mitigations twice a year starting in 2025. Unlike the threat landscape for existing applications, the LLM threat landscape is evolving more rapidly as more research is conducted, notes Clinton.

In addition to defining the scope of the generative AI security technology landscape, OWASP is providing advice on how to build an AI Security Center of Excellence, along with a new guide for handling deepfakes. Existing project initiatives and working groups already address risk and exploit data mapping, LLM AI cyber threat intelligence, secure AI adoption, and AI red teaming and evaluation. Other guides will be added or updated as needed with the help of volunteers, says Clinton.

OWASP is looking for additional volunteers to join various working groups that are, for example, defining a set of best practices for responding to an AI security incident, he adds.

In many regards, AI security is following the same trajectory that led to the DevSecOps best practices that many software engineering teams have adopted to better secure application environments. The challenge is that many of the data scientists who build AI models know even less about cybersecurity than the average application developer. “It’s déjà vu,” says Clinton.

Unfortunately, cybersecurity teams are once again playing catch-up with an emerging technology that is being widely deployed without all the cybersecurity implications being fully considered. As a result, organizations are only now starting to understand the need for a gateway that, for example, ensures sensitive data isn’t inadvertently shared with an LLM.
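To make the idea concrete, the sketch below shows one way such a gateway could work: outbound prompts are scanned for a few sensitive-data patterns and redacted before anything leaves the organization. It is illustrative only; the regex patterns are simplistic stand-ins for a real data-loss-prevention engine, and the `llm_client.complete` call is a hypothetical placeholder for whatever LLM API an organization actually uses.

```python
import re

# A few example categories of sensitive data.
# A production gateway would rely on a proper DLP/classification engine.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive substrings with placeholders and report which
    categories were found, so the gateway can keep an audit trail."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

def gateway_call(prompt: str, llm_client) -> str:
    """Hypothetical gateway: sanitize the prompt, log any redactions,
    then forward the sanitized text to the LLM."""
    sanitized, findings = redact(prompt)
    if findings:
        print(f"audit: redacted {findings} before forwarding to LLM")
    return llm_client.complete(sanitized)  # placeholder LLM client
```

The same choke point is also where an organization could enforce logging, rate limits or policy checks, which is why a gateway tends to be one of the first controls security teams reach for.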

The issue is that there is already a chronic shortage of cybersecurity expertise, and the number of cybersecurity professionals who also have AI expertise is even smaller. Coupled with a lack of cybersecurity awareness among the teams building and deploying AI models, it quickly becomes apparent that it is only a matter of time before organizations experience a breach. In fact, cybercriminals are already compromising data they know will be used to train an LLM in the hopes of being able to bypass whatever security guardrails might have been put in place, a practice also known as LLM jailbreaking.

Ultimately, it is no longer a question of whether there will be a security breach involving AI models so much as to what degree it can be contained once it inevitably occurs.
