
AuthZed, a provider of a platform for managing authorization, this week added support for agentic artificial intelligence (AI), along with use cases that require retrieval-augmented generation (RAG) platforms.
Company CEO Jake Moshenko said this extension to the SpiceDB authorization platform will enable organizations to centrally manage how AI applications and their associated agents are granted authorization to access data and other existing applications at scale. That’s critical because every data source an AI agent introduces comes with additional access rules that need to be managed, he added.
The AuthZed platform is based on Zanzibar, the permission system that Google developed to manage authorization across its cloud services. SpiceDB uses that foundation to manage trillions of access control lists and millions of authorization checks per second. Legacy approaches to authorization based on, for example, the Open Authorization (OAuth) protocol, simply will not be able to scale to the levels that will be required by AI agents, said Moshenko.
As organizations find themselves assigning tasks to thousands of AI agents, the need for an authorization platform designed to manage access lists at that scale will become far more evident, he added. More challenging still, in many cases humans will remain in the loop, which requires an authorization platform capable of dynamically recognizing both human and non-human identities, noted Moshenko.
Additionally, organizations will be able to define and restrict which tools or APIs an agent can access, allow AI agents to inherit user-level permissions, require specific approvals for sensitive actions and maintain complete audit logs.
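Those four controls can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not AuthZed's actual client API; all class and field names here are hypothetical, and a real deployment would delegate the checks to a live authorization service rather than an in-memory store.

```python
# Sketch of agent-scoped authorization: tool restrictions, user-permission
# inheritance, approval gates for sensitive actions, and an audit log.
# All names are hypothetical illustrations, not AuthZed's API.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    acting_for: str             # the human user the agent acts on behalf of
    allowed_tools: set          # tools/APIs the agent may call
    sensitive_actions: set      # actions that require explicit approval

@dataclass
class AuthzStore:
    user_perms: dict                          # user -> permitted actions
    audit_log: list = field(default_factory=list)

    def check(self, agent: str, policy: AgentPolicy, tool: str,
              action: str, approved: bool = False) -> bool:
        # 1. The agent may only call tools it was explicitly granted.
        # 2. The agent inherits, and never exceeds, its user's permissions.
        # 3. Sensitive actions additionally require an approval flag.
        allowed = (
            tool in policy.allowed_tools
            and action in self.user_perms.get(policy.acting_for, set())
            and (action not in policy.sensitive_actions or approved)
        )
        # Every decision, allowed or denied, lands in the audit log.
        self.audit_log.append((agent, tool, action, allowed))
        return allowed
```

With this shape, an agent acting for a user who can read and delete reports could read freely, but a delete would be denied until an approval flag is supplied, and every attempt would be recorded.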
Organizations will also be able to use SpiceDB to pre-filter documents based on user permissions before embedding them, post-filter vector search results to exclude unauthorized content and synchronize permissions in real time with other applications.
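The pre- and post-filtering steps can be sketched as two small functions. The document shapes and the `can_view` callback below are hypothetical stand-ins for a live permission check, not SpiceDB's data model; the point is that permissions are enforced both before documents enter the vector index and again at query time, since grants may change between the two.

```python
# Sketch of permission-aware RAG filtering: drop unauthorized documents
# before embedding, then re-check retrieved hits at query time.
# The data shapes here are hypothetical, not SpiceDB's actual model.

def pre_filter(docs, user, can_view):
    """Keep only documents the user may view before they are embedded."""
    return [d for d in docs if can_view(user, d["id"])]

def post_filter(results, user, can_view):
    """Re-check vector-search hits; grants may have changed since indexing."""
    return [r for r in results if can_view(user, r["id"])]

# Toy permission check standing in for a call to an authorization service.
grants = {("alice", "doc-1"), ("alice", "doc-3")}
def can_view(user, doc_id):
    return (user, doc_id) in grants
```

Applying `pre_filter` to a corpus of three documents for "alice" would keep only "doc-1" and "doc-3"; `post_filter` applies the same gate to whatever the vector store returns.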
Most organizations are just now starting to understand the scope of the security challenges that AI agents create. Rather than being simply another type of non-human identity to be managed, AI agents will be able to perform a wide range of tasks. Limiting the scope of the data they can access by creating boundaries not only shapes the behavior of those agents, but also limits the amount of data that might be stolen or encrypted should an AI agent be compromised by a cyberattack, said Moshenko.
Given the scope of the permissions that these AI agents are given, it’s only a matter of time before they become high-value targets, he added.
It’s not clear who within organizations will assume responsibility for securing AI agents. In many instances, what data they can access will initially be determined by the data science teams that build them. However, it will inevitably fall on cybersecurity teams to secure them and ensure that the blast radius of any potential breach is as limited as possible.
Those same cybersecurity teams will also need to put guardrails in place so that a compromised AI agent cannot make requests of other agents without some type of approval process.
Regardless of how those goals are achieved, there will inevitably be some type of breach. The challenge now is to find a way to limit the potential damage without unduly limiting the capabilities of the AI agents being deployed.