
The Linux Foundation today added to its portfolio an open source gateway for artificial intelligence (AI) agents that, in addition to providing connectivity, also enables IT teams to observe and secure those agents.
Developed by Solo.io, agentgateway is a purpose-built gateway for AI agents and the third major AI agent project the Linux Foundation has launched in the past two months.
The other two are the Agent2Agent (A2A) protocol, developed by Google, which enables AI agents to communicate with one another, and the AGNTCY project, an open source effort initially developed by Cisco that allows AI agents to find one another, communicate, collaborate and be managed across platforms, models and organizations.
The agentgateway is compatible with both A2A and the Model Context Protocol (MCP), developed by Anthropic to provide AI agents with access to data they can use to extend their functionality.
Organizations lending their support to the agentgateway project include Amazon Web Services (AWS), Cisco, Huawei, IBM, Microsoft, Red Hat, Shell and Zayo.
Solo.io CEO Idit Levine said the agentgateway is the first data plane built from the ground up for AI agents that governs and secures communication across agent-to-agent, agent-to-tool and agent-to-large language model (LLM) interactions. The ability to govern and secure AI agent interactions fills a unique requirement that enterprise IT organizations will need to address as they increasingly operationalize AI agents, she added.
It’s not clear how much overlap there might be between the various agentic AI initiatives launched by the Linux Foundation, but hopefully as each project matures the level of collaboration between them will increase.
In the meantime, IT organizations will soon find themselves managing thousands of AI agents much like they currently manage the way humans interact with applications and IT infrastructure. Each AI agent will have its own unique identity that has been assigned a set of privileges for access to data and to other agents. Applying governance policies to those interactions will be critical, because AI agents have already shown an inclination to incorporate any data they can reach into the workflows assigned to them, regardless of who owns that data or how sensitive it might be.
The challenge facing IT teams now is determining how best to develop and then enforce the policies that will be required to ensure AI agents don’t exceed the scope of the tasks they have been assigned to perform.
With a few notable exceptions, most organizations are still experimenting with AI agents, which are significantly more autonomous than the AI copilots that typically assist humans with a task rather than completing it on their own. Exactly how much autonomy an AI agent can be given will naturally depend on the complexity of the task, but for the foreseeable future humans will need to review an agent’s work before incorporating it into the output of a workflow. The concern is that AI agents, even after flawlessly performing an assigned task many times over, still have the potential, when left unsupervised, to wreak havoc at unprecedented scale.