
Agentic AI is radically transforming software, moving beyond simple text-generation models. Agentic systems can reason, plan and execute complex tasks by interacting with external systems and APIs. But for most organizations, this powerful potential remains just out of reach: the most advanced AI models are often isolated from the very tools, data and internal systems where real business value is created, trapped behind a wall of incompatible APIs and proprietary connections.

The Model Context Protocol, or MCP, is the standard designed to solve this problem. Think of it as a universal translator or a “USB-C for AI.” Itʼs an open standard that creates a common language, allowing AI agents to securely and seamlessly plug into the diverse landscape of enterprise applications. But adopting a new protocol, let alone implementing one, is never as simple as flipping a switch. It requires a clear understanding of both the architectural changes and the practical hurdles on the ground.

Steve Rodda, CEO of Ambassador, an API development company with MCP-enabled product offerings, shares in our email interview that to truly harness the power of agentic AI, leaders must look past the protocol itself and focus on the deeper challenge of preparing their existing systems for this new, interconnected world. “The journey from todayʼs infrastructure to a truly AI-native future is complex,” he explains, “but it starts with understanding how to bridge the gap between our current code and the context these new models demand.” 

Why MCP is More Than Just Another API 

To appreciate the significance of MCP, itʼs important to see it as more than just an evolution of the APIs weʼve used for decades. A traditional REST API is designed for predictable, programmatic interactions where a developer defines a specific request and anticipates a specific response. Itʼs a rigid contract. An MCP server, however, operates on a different principle. It doesn’t just expose a function; it describes the functionʼs purpose, its inputs, and its potential outcomes in a way that an AI can understand and reason about. 
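To make that distinction concrete, here is a minimal sketch in plain Python (no SDK) of the difference between a bare function and a self-describing MCP-style tool. The `get_invoice_status` function and its schema are hypothetical illustrations, not part of any real product or the MCP SDK itself.

```python
# A traditional REST-style handler exposes behavior but says nothing
# about intent; the caller must already know exactly what it does.
def get_invoice_status(invoice_id: str) -> dict:
    # Hypothetical lookup; a real server would query a billing system.
    return {"invoice_id": invoice_id, "status": "paid"}

# An MCP-style tool wraps the same function in machine-readable context:
# its purpose, its inputs and when to use it, so a model can reason
# about whether this tool fits the task at hand.
TOOL_DESCRIPTOR = {
    "name": "get_invoice_status",
    "description": (
        "Look up the payment status of a customer invoice. "
        "Use this when a user asks whether an invoice has been paid."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier, e.g. 'INV-1042'",
            }
        },
        "required": ["invoice_id"],
    },
}
```

The descriptor, not the code, is what the AI sees: it decides from the description and schema whether and how to call the tool.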

Rodda emphasizes that this shift from mere execution to contextual understanding is the key to unlocking autonomous work. He explains that “providing an AI with a well-described set of tools via MCP allows the model to move beyond simple commands. Instead of being told exactly what to do, the AI can independently select and chain together the right tools to achieve a complex goal, forming the basis of a truly agentic AI application. This capability is what allows an organization to build systems that can adapt and react to dynamic business needs without constant human intervention.” 

This ability to safely open up complex tools for AI consumption is already creating significant value, says Rahim Bhojani, CTO at Dremio. The protocol is fundamentally changing how organizations approach data access. “MCP lets organizations safely democratize data for AI—unlocking real-time copilots and assistants without compromising governance or security,” he explains. This highlights a clear path forward for enterprise leaders. Instead of building one-off integrations for every new AI feature, organizations should focus on creating secure, MCP-compliant “data services” that can be discovered and used by any number of AI agents, ensuring both innovation and control. 

The “Code-to-Context” Gap: The Real Challenge for the Enterprise 

While the promise of a standardized, AI-native ecosystem is compelling, a significant gap exists between this vision and the reality of most enterprise IT environments. The true difficulty doesnʼt lie in understanding the protocol itself, but in retrofitting decades of existing applications, data stores, and business logic to communicate through it. This is where organizations often encounter a hidden but critical obstacle: the “Code-to-Context” gap. 

Many emerging tools operate on a spec-to-MCP model, which assumes that a clean, comprehensive OpenAPI specification already exists for the tool you want to connect. But, as Rodda explains, this is rarely the case for the most valuable enterprise systems. The core logic and data access rules are often locked away in undocumented APIs or monolithic codebases. He argues that before an organization can even think about building an effective MCP server, it must first solve the foundational problem of translating its raw code and implicit business rules into a high-quality, AI-readable specification that can provide meaningful context.

This architectural challenge is why simply adopting MCP connectors is not a silver bullet. Yuval Perlov, Chief Technology Officer at K2view, offers a pragmatic analysis of the underlying complexities that organizations must address. 

“MCP is a framework — not a complete solution. To unlock its enterprise potential, several critical data platform challenges must be addressed… These challenges highlight why customizable connectors, while powerful, must operate within a broader data architecture that addresses security, freshness, compliance, and contextual accuracy in real time. Only then can frameworks like MCP become true enablers of secure, scalable, and trusted enterprise AI.” 

This perspective serves as a crucial reality check for technical leaders. Itʼs essential to evaluate not just the API endpoint, but the entire data pipeline behind it, ensuring your architecture can support the real-time performance and granular governance that enterprise-grade AI demands. 

MCP in Action: Real-World Use Cases from Industry Leaders 

Despite the architectural hurdles, innovative organizations are already demonstrating the transformative potential of MCP, bridging the Code-to-Context gap and proving whatʼs possible when AI agents are safely connected to enterprise systems. These pioneering efforts tend to fall into three key areas: revolutionizing the developer and analyst experience, creating new frameworks for quality and governance, and unlocking core enterprise data. 

—Revolutionizing the Developer and Analyst Experience 

Perhaps the most immediate value from MCP comes from its ability to reduce friction in the daily workflows of technical teams. By embedding AI assistance directly into their primary tools, companies are seeing significant gains in productivity and speed. Sean Kruzel, Lead AI Engineer at Infactory, describes how they bring data access directly to where users already are. 

“By adding an MCP server to our data integrations, we enable Analysts to query their data directly from ChatGPT, Claude, and others. Developers can add Infactoryʼs MCP servers to Cursor and Windsurf to quickly build front-end applications which access their published slices of their data.” 

This approach suggests a powerful starting point for any organization. By identifying the most repetitive, high-friction tasks your developers and analysts perform, you can prototype a targeted MCP server that streamlines their work and delivers an immediate and visible return on investment. This is echoed by Mohith Shrivastava, Developer Advocate at Salesforce, who points to simplifying core DevOps tasks. 

“Have you ever wished you could just ask an AI to do something like deploy code — without having to type long commands or remember the commands? Thatʼs exactly what MCP can help with.” 

Simple, well-defined functions can become powerful MCP tools. Leaders should therefore encourage their teams to expose high-value internal scripts and commands as MCP-compliant tools to build momentum and prove the value of agentic automation. 
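As one hedged illustration of that advice, an internal deploy script could be wrapped as a described, input-validated tool. The script path, service names and allowlist below are all hypothetical.

```python
import shlex
import subprocess

# Hypothetical wrapper turning an internal deploy script into an
# MCP-style tool: a plain function with validated, allowlisted inputs.
ALLOWED_ENVS = {"staging", "production"}

def deploy_service(service: str, env: str, dry_run: bool = True) -> str:
    """Deploy `service` to `env`; refuses environments outside the allowlist."""
    if env not in ALLOWED_ENVS:
        raise ValueError(f"unknown environment: {env}")
    cmd = ["./scripts/deploy.sh", service, env]  # hypothetical script path
    if dry_run:
        # Safe default: show what would run instead of running it.
        return "DRY RUN: " + shlex.join(cmd)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```

Exposing a function like this through an MCP server lets an agent trigger a deployment in plain language while the allowlist and dry-run default keep it on rails.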

—Forging a New Frontier in Quality and Governance 

Beyond pure efficiency, MCP is also creating entirely new possibilities for ensuring the quality, safety, and compliance of AI systems themselves. As AI agents become more autonomous, traditional testing methods are no longer sufficient. Akash Agrawal, VP of DevOps and DevSecOps at LambdaTest, argues that a new paradigm is required. 

“With MCP enabling AI agents to dynamically chain together multiple tools and services, traditional end-to-end testing seems inadequate. What we need is autonomous workflow validation… Instead of validating a known path to success, we evaluate the AI agent on its basis for choosing the right tool for the right context.” 

Leaders ought to think beyond testing static outcomes. The focus must shift to evaluating the agent’s decision-making process, ensuring it not only arrives at a correct answer but does so using the appropriate and authorized methods. A concrete example of this in workflow governance comes from Chad Burnette, Founder and CTO of Wayfound, whose platform acts as a quality control layer for other agents. 

“An example is an agentic system that receives incoming emails… before the response is sent to the customer, the Wayfound MCP tool ‘evaluate_sessionʼ can check the work of the agent for any violations of guidelines specified… (things like tone of voice, bias, formatting issues, PII, etc.).” 

This provides a clear blueprint for building responsible AI. By implementing similar “AI guardrails” via MCP, organizations can create a final quality gate that verifies an agent’s output for compliance and safety before it ever reaches an end-user. 
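A minimal sketch of such a guardrail gate, inspired by the pattern above but using invented checks rather than Wayfound’s actual API, might look like this:

```python
import re

# Hypothetical guardrail gate: the agent's draft reply is checked for
# guideline violations (PII, tone) before it can reach the customer.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def evaluate_draft(draft: str) -> list[str]:
    """Return a list of guideline violations found in the draft reply."""
    violations = []
    if EMAIL_RE.search(draft):
        violations.append("possible PII: email address")
    if any(word in draft.lower() for word in ("stupid", "idiot")):
        violations.append("tone: abusive language")
    return violations

def send_if_clean(draft: str) -> str:
    """Final quality gate: only a violation-free draft is sent."""
    issues = evaluate_draft(draft)
    return "SENT" if not issues else "BLOCKED: " + "; ".join(issues)
```

In production these checks would themselves often be model-driven, but the control flow is the same: the gate sits between the agent’s output and the end-user.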

—Unlocking Enterprise Data and Content 

MCP also addresses the long-standing challenge of connecting AI to foundational enterprise data and content repositories. Bhojani of Dremio offers a powerful use case centered on their data lakehouse.

“[MCP] allows an internal chatbot to answer business questions by generating trusted SQL over Iceberg tables.” 
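The pattern Bhojani describes can be sketched as a tool that owns SQL generation, so the agent only supplies vetted parameters and never composes raw queries. The table names, template and bounds below are illustrative assumptions, not Dremio’s implementation.

```python
# Sketch of "trusted SQL" generation behind an MCP tool: the server
# holds a fixed query template and validates every agent-supplied value.
ALLOWED_TABLES = {"sales.orders", "sales.customers"}  # governed Iceberg tables

def revenue_by_region_sql(table: str, year: int) -> str:
    """Return trusted, parameter-checked SQL for one business question."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table not exposed to agents: {table}")
    if not 2000 <= year <= 2100:
        raise ValueError("year out of range")
    # The template is fixed; the agent can only fill vetted parameters.
    return (
        f"SELECT region, SUM(amount) AS revenue "
        f"FROM {table} WHERE year = {year} GROUP BY region"
    )
```

Because the template and allowlist live server-side, governance travels with the tool: every agent that discovers it inherits the same constraints.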

The key here is abstraction. Engineering teams can create a trusted MCP server that handles the complexity of generating secure, accurate SQL, allowing the AI agent to interact with a simple tool without needing to understand the underlying database architecture. Facundo Giuliani, Solutions Engineering Team Manager at Storyblok, a headless CMS platform, explains how MCP helps manage structured content. 

“One of the key challenges weʼre solving with MCP is bridging the gap between structured content and generative AI outputs. In a headless CMS, content is highly modular and context-dependent. MCP helps us preserve that structure and context when using AI for content generation or enrichment.” 

For any organization with a mature content management system, this use case is critical. It shows that MCP can be used to grant AI-controlled access to valuable content repositories, allowing for AI-powered enrichment while ensuring the integrity of the underlying content model is never compromised. 

Navigating the Nuances of MCP Adoption 

As the use cases demonstrate, the potential of MCP is undeniable. However, the path to successful implementation is nuanced and requires more than just technical acumen. It requires architectural foresight and a pragmatic understanding of the ecosystem’s current limitations. Moving forward, leaders must navigate these challenges thoughtfully to build a durable foundation for their AI-native applications. 

A critical aspect of this foundation is the underlying infrastructure. Rodda suggests that a container-based development approach is essential for success, explaining that creating “the stable, scalable, and isolated environments required to run MCP servers effectively is a complex task. Using containerization from the outset simplifies deployment, ensures consistency, and provides the performance needed to handle the dynamic workloads that AI agents generate.” Without this, organizations risk building their AI future on unstable ground. 

Beyond the core infrastructure, success also depends on the skillful design of the MCP servers themselves. As Kruzel of Infactory points out, there is an art to crafting a toolset that an AI can use effectively. 

“Like most engineering endeavors, the better designed your MCP servers are and the better their documentation, the better they perform out of the box. There is also an art to defining the tools enabled by the MCP server. Too many tools, and MCP clients struggle to call the right tool. Too few tools and you are required to provide a lot of documentation and custom code for each tool call.” 

This provides a clear path for development teams. “Before exposing an entire API, start by curating a small, well-documented set of high-value tools and iteratively expand from there, monitoring how effectively AI agents can use them. It’s also vital to avoid common architectural mistakes,” suggests Agrawal of LambdaTest, who has observed developers make critical errors in their server design. “A friend of mine came to me for my tool… and he actually used his own OpenAI key, MCP server to get the context… Your MCP server should give the output of your APIs. It should give you context, not go and give to OpenAI, get the context from OpenAI, then again forward to OpenAI or any model.”

This misstep highlights a crucial best practice. Ensure your MCP server’s role is strictly limited to providing context from your own systems; the interaction with the large language model should be initiated by the end-user or the agentic framework, not the server itself. 
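The separation of concerns can be sketched as follows; the helper names and stubbed model call are hypothetical, used only to show where the boundary sits.

```python
# Sketch of the right boundary: the MCP server answers from *your*
# systems only, and the model interaction lives with the caller.

def mcp_tool_get_customer_context(customer_id: str) -> dict:
    """GOOD: the server returns context from internal data and stops there."""
    # Hypothetical internal lookup; no model call happens on the server.
    return {"customer_id": customer_id, "tier": "enterprise", "open_tickets": 2}

def agent_framework(customer_id: str, call_model) -> str:
    """The LLM call is injected by the caller, outside the MCP server."""
    context = mcp_tool_get_customer_context(customer_id)
    return call_model(f"Summarize this account: {context}")

# A stub standing in for a real model client.
reply = agent_framework("C-17", call_model=lambda prompt: f"[model] {prompt[:30]}...")
```

The anti-pattern Agrawal describes would put a model call inside `mcp_tool_get_customer_context`, round-tripping context through the provider twice; keeping the server model-free avoids that.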

The Future Is AI-Native, and It Starts With Your Code

The Model Context Protocol paves the way for a new generation of agentic software capable of reasoning, acting and automating work in ways that were previously unimaginable. Weʼve seen how it can streamline developer workflows, introduce powerful new forms of AI-driven quality control, and unlock enterprise data for intelligent systems. But we’ve also seen that the journey requires navigating significant architectural challenges, from the maturity of the ecosystem to the very design of the tools we offer to AI.

Rodda gets the final word, bringing the conversation back to the most critical starting point for any organization. He asserts that while MCP provides the universal language for this new era, the real competitive advantage will not come from simply adopting the protocol. It will come from the ability to rapidly and reliably translate the immense value locked within existing source code, legacy systems, and undocumented APIs into the clean, structured context that AI agents can understand and act upon. 

The future of software is undoubtedly AI-native, as Rodda highlights. And for developers and technical leaders, the defining challenge is to build the bridge that connects todayʼs enterprise reality to that future. The work begins not by chasing the latest model, but by looking inward and strategically preparing your current code and APIs to become active participants in the new agentic world. That is the foundation upon which the next wave of innovation will be built.