Traceable AI via an early access program is previewing a set of extensions to its core platform designed to protect the application programming interfaces being used to access generative artificial intelligence (AI) platforms.
These extensions to the Traceable AI platform will prevent those APIs from being manipulated in a way that leads to the leakage of sensitive data in addition to thwarting prompt injection attacks launched via an API, says Allison Averill, director of product marketing for Traceable AI.
A dedicated dashboard provides insights into the security posture of generative AI APIs within their applications. In addition to preventing data leaks, Traceable AI will provide tools for discovering and monitoring in real time the APIs being used to access the large language models (LLMs) that are being employed by an organization.
Finally, Traceable AI will provide testing tools to make it easier to identify API vulnerabilities that might be exploited.
Cybercriminals have long exploited APIs to, for example, surreptitiously exfiltrate data. Now those same tactics and techniques are being used to exploit LLMs. “Those APIs are easy to manipulate,” notes Averill.
Traceable AI is addressing generative AI security issues outlined by Open Worldwide Application Security Project (OWASP), an online community of cybersecurity professionals that has defined similar frameworks for securing various types of IT platforms. The challenge when it comes to generative AI platforms is, given the rapid pace of adoption, many organizations have yet to address securing the APIs used to access them. It’s feasible, for example, for an entire process being automated using a generative AI platform to be hijacked via the APIs being exposed to the Internet, notes Averill.
Traceable AI is able to monitor that activity using a platform that takes advantage of the extended Berkley Packet Filtering (eBPF) subsystem in the latest releases of the Linux operating system to observe an application environment. That approach enables its OmniTrace Engine to reduce the need to rely on distributing agent software across an entire IT environment to track usage of APIs.
It’s not clear to what degree the APIs being used to access LLMs might be already compromised, but rather if it’s not more a matter of when and to what degree. The challenge when it comes to APIs is that cybersecurity teams often assume the developers of those APIs are responsible for securing them. The developers and data science teams that create those APIs, however, often lack the cybersecurity expertise required to effectively secure them.
Unfortunately, it will likely take a major cybersecurity incident involving the APIs used to access LLMs to focus attention on the scope of the threat. As organizations increasingly operationalize AI, the number of those APIs that might be exploited is also only going to increase, so the probability there will be an incident is only going to increase as more LLMs are invoked.
In the meantime, hopefully more data science and developers will have at least consulted with cybersecurity professionals to better protect those APIs.