At the KubeCon + CloudNativeCon Europe 2025 conference last week, Solo.io added support for the Model Context Protocol (MCP), a standard for integrating artificial intelligence (AI) platforms and agents, to kgateway, an open source application programming interface (API) gateway for Kubernetes clusters.

In addition, Solo.io made good on a previous promise to donate kgateway to the Cloud Native Computing Foundation (CNCF), which oversees the development of multiple open source projects, including Kubernetes.

The MCP gateway for kgateway provides organizations with an open source alternative for integrating AI platforms and agents at a time when organizations are looking to leverage multiple models to automate processes end to end.

Keith Babo, chief product officer for Solo.io, told conference attendees that there will now be an open source approach to federating AI tools through a single endpoint.

Kubernetes is rapidly emerging as a de facto standard for building AI applications. At the same time, companies like Domino’s Pizza, ParkMobile and Vonage already use kgateway in production for API traffic management. The MCP Gateway will make it simpler to extend the reach of kgateway into AI applications, noted Babo.

MCP was originally developed by Anthropic and is rapidly being adopted by organizations to create virtual MCP servers that integrate AI models and agents. The challenge is integrating MCP with the existing tools that organizations rely on to invoke and secure APIs. The MCP Gateway addresses that issue through multiplexing: it consolidates multiple MCP servers behind a single endpoint, which can then automatically discover, register and secure them, eliminating the need to connect agents and tools manually.
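The consolidation pattern described above can be sketched in a few lines of Python. This is a toy illustration of the multiplexing idea only, not kgateway's actual implementation; every class, method and tool name below is hypothetical.

```python
# Illustrative sketch: route tool calls arriving at a single endpoint to
# whichever backend "server" registered each tool. All names are hypothetical.

class ToolServer:
    """A stand-in for an MCP server exposing a set of named tools."""

    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # maps tool name -> callable


class MCPMultiplexer:
    """Consolidates many tool servers behind one call interface."""

    def __init__(self):
        self.registry = {}  # tool name -> (server name, callable)

    def register(self, server):
        # Auto-discover and register every tool the server exposes.
        for tool_name, fn in server.tools.items():
            self.registry[tool_name] = (server.name, fn)

    def call(self, tool_name, *args):
        # Route the call to whichever server owns the tool.
        _server_name, fn = self.registry[tool_name]
        return fn(*args)


# Usage: two backend servers federated behind one multiplexer.
mux = MCPMultiplexer()
mux.register(ToolServer("weather", {"get_forecast": lambda city: f"sunny in {city}"}))
mux.register(ToolServer("math", {"add": lambda a, b: a + b}))
print(mux.call("add", 2, 3))  # 5
```

An agent talking to the multiplexer never needs to know which server hosts a given tool, which is the point of federating them through one endpoint.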

The MCP Gateway itself is based on AutoGen, an open source, event-driven framework for integrating AI agents that was originally developed by Microsoft.

The MCP Gateway also provides a centralized registry for MCP tools that an organization might have deployed across a heterogeneous computing environment. Additionally, IT teams will be able to better observe the behavior of AI agents and models.

It’s not clear how widely adopted MCP has become, but as it continues to gain traction, it provides a method for ensuring interoperability between AI models and agents. It also makes it simpler to swap one out for another if necessary.

The overall goal is to prevent organizations from being locked into any set of AI agents. That’s critical in an era when advances in the reasoning capabilities of large language models (LLMs) continue to arrive rapidly.

Longer term, most organizations will be adding or replacing AI agents as the reasoning capabilities of LLMs increase. The challenge will be building and deploying AI agents in a way that enables them to invoke multiple LLMs as needed, an issue MCP is specifically designed to address.

It’s still early days as far as building and deploying AI agents is concerned, but one thing is already apparent: there is a pressing need to address interoperability issues as the number of AI agents employed continues to increase exponentially.
