Codeglide.ai, an arm of Opsera, today launched a lifecycle management platform for Model Context Protocol (MCP) servers, which are increasingly being deployed to provide artificial intelligence (AI) agents and applications with access to data.

Originally developed by Anthropic, MCP is emerging as a de facto application-layer protocol. It defines a set of JSON-RPC messages, typically carried over HTTP/S, through which clients invoke an application programming interface (API).
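To make the protocol concrete, the sketch below builds an MCP `tools/call` request as a JSON-RPC 2.0 message, which is the message format the protocol uses. The tool name and arguments are purely illustrative.

```python
import json

def make_tools_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: ask a hypothetical server-side tool for a customer record.
request = make_tools_call(1, "lookup_customer", {"customer_id": "C-1042"})
print(json.dumps(request, indent=2))
```

An MCP server receiving this message would dispatch it to the named tool and return the result in a matching JSON-RPC response, so the calling agent never needs to know which backend API ultimately served the data.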

Organizations are now creating and deploying MCP servers based on that protocol to give AI applications and agents a standard mechanism for accessing specific sets of data. The Codeglide.ai software-as-a-service (SaaS) platform will manage that process using a familiar set of GitOps-based workflows, says Opsera CEO Kumar Chivukula.

The Codeglide.ai platform monitors changes to APIs, automatically generates secure MCP servers and keeps them synchronized to ensure AI agents and large language models (LLMs) have continuous access to data. The platform also makes use of Swagger, a framework for designing and documenting APIs, to give AI agents access to data via processes that are easier to document, adds Chivukula. As a result, teams can more consistently roll out new endpoints, retire old ones or update schemas without disrupting workflows, he notes.
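The general idea behind generating an MCP server from an API description can be sketched as follows: each operation in an OpenAPI (Swagger) document maps to a tool definition (name, description, input schema) that an MCP server could expose. The spec fragment and naming scheme here are illustrative, not Codeglide.ai's actual implementation.

```python
def tools_from_openapi(spec):
    """Map each OpenAPI operation to an MCP-style tool definition."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            tools.append({
                # Fall back to a derived name when operationId is absent.
                "name": op.get(
                    "operationId",
                    f"{method}_{path.strip('/').replace('/', '_')}",
                ),
                "description": op.get("summary", ""),
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        p["name"]: p.get("schema", {"type": "string"})
                        for p in op.get("parameters", [])
                    },
                },
            })
    return tools

# A minimal, made-up OpenAPI fragment describing one endpoint.
spec = {
    "paths": {
        "/customers/{id}": {
            "get": {
                "operationId": "get_customer",
                "summary": "Fetch a customer record",
                "parameters": [
                    {"name": "id", "in": "path",
                     "schema": {"type": "string"}}
                ],
            }
        }
    }
}
print(tools_from_openapi(spec))
```

Because the tool definitions are derived mechanically from the API description, regenerating them whenever the spec changes is what keeps an MCP server synchronized with the underlying API, which is the kind of drift a lifecycle platform watches for.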

Opsera encountered the need for a lifecycle management platform for MCP servers when it built one for its own DevOps platform. However, because MCP servers are often built and deployed by the data engineering and data science teams building AI applications, Opsera decided to set up a subsidiary specifically chartered to bring to market a lifecycle management platform that can also be integrated with, for example, DevOps workflows based on GitHub Actions, says Chivukula.

It’s not clear how many MCP servers any one organization might need or how frequently they are likely to be updated, but many organizations already manage thousands of APIs. Rather than requiring AI agents to invoke each of those APIs individually, an MCP server makes it simpler to aggregate calls to data at a higher level of abstraction. As such, the number of MCP servers being deployed should increase exponentially, especially as more AI agents are deployed across the enterprise, adds Chivukula.
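That aggregation argument can be sketched in a few lines: a single higher-level MCP tool fans out to several lower-level APIs, so the agent makes one call instead of orchestrating each API itself. The backend functions here are stand-ins for real services, not any particular vendor's APIs.

```python
def fetch_profile(customer_id):
    """Stand-in for a CRM API call."""
    return {"customer_id": customer_id, "name": "Acme Corp"}

def fetch_orders(customer_id):
    """Stand-in for an order-management API call."""
    return [{"order_id": "O-1", "total": 42.0}]

def customer_overview(customer_id):
    """One MCP-style tool aggregating multiple backend APIs,
    returning a single combined result to the calling agent."""
    profile = fetch_profile(customer_id)
    orders = fetch_orders(customer_id)
    return {
        "profile": profile,
        "orders": orders,
        "order_count": len(orders),
    }

print(customer_overview("C-1042"))
```

The agent sees one coarse-grained tool rather than two fine-grained APIs, which is the higher level of abstraction the article describes.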

There is, of course, a range of other MCP lifecycle management issues that Codeglide.ai plans to address, including governance and security. In the meantime, however, many organizations are looking to deploy MCP servers as quickly as possible, notes Chivukula. As pressure to operationalize AI mounts, the need to streamline access to data becomes a more pressing concern, he adds.

Ultimately, each organization will need to determine how best to build and deploy MCP servers, but the only thing arguably worse than having no MCP servers may be having no effective way to manage too many of them.

As AI continues to evolve, IT teams will eventually need to decide which teams within their ranks should be responsible for managing all the MCP servers that will soon be deployed in their production environments.