
AI models operate in a vacuum, often disconnected from the real-time data, private codebases, and specialized tools that run modern businesses. This disconnect is the last-mile problem for AI, and the primary barrier to unlocking its true enterprise value.
Without access to specific, relevant context, even the most advanced model can't do much more than hold a generic conversation. Before an AI agent can use a tool, that tool needs to be described in a way the model can understand, and this is where many organizations hit their first major roadblock.
Model Context Protocol (MCP) solves for this by functioning as a kind of USB port for AI, allowing any model to securely plug into practically any tool or data source. Building the MCP bridge requires a high-quality, machine-readable blueprint, typically an OpenAPI specification. But in the real world, these blueprints are often missing, outdated, or poorly written, creating a “specification gap” that stops AI initiatives before they can even begin.
“The conversations we’re having with leaders across the industry all point to the same hurdle in making their existing systems AI-ready, securely and efficiently,” says Steve Rodda, CEO of Ambassador, an API development company. “The reality is that most innovation is built on years of existing code, and asking every team to stop and write perfect specifications from scratch is a non-starter. This is the ‘garbage in, garbage out’ problem for the AI era; a model’s effectiveness is completely dependent on the quality of the tool descriptions it’s given.”
Notable engineering leaders on the front lines of MCP adoption—spanning API gateways, enterprise systems, data infrastructure, and cloud security—share insights on how they are using MCP servers.
From Developer Productivity to Business Enablement
According to Rodda, the most critical first step in any AI strategy is to assess and improve the machine-readability of the tools you want the AI to use. He explains that Ambassador is tackling this directly with a ‘code-first’ approach in its Blackbird platform.
Rodda elaborates, “Instead of requiring developers to have perfect specs ready, the platform meets them where they are by connecting directly to their code repositories and scanning the existing services to automatically generate the high-quality OpenAPI specs needed to build a robust and reliable MCP server.” This helps close the specification gap, de-risk AI adoption, and finally unlock the value of legacy systems that would otherwise be left behind.
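To make the "specification gap" concrete, here is a minimal sketch of what turning an OpenAPI operation into an AI-readable tool description looks like. The `name`/`description`/`inputSchema` fields follow the published MCP tool shape; the endpoint and its parameters are hypothetical, and a real generator like the one Rodda describes would of course handle far more of the spec.

```python
# Hypothetical OpenAPI fragment for a single endpoint.
openapi_operation = {
    "operationId": "getOrderStatus",
    "summary": "Look up the fulfillment status of an order",
    "parameters": [
        {"name": "order_id", "in": "path", "required": True,
         "schema": {"type": "string"}},
    ],
}

def operation_to_mcp_tool(op: dict) -> dict:
    """Map one OpenAPI operation onto an MCP-style tool description."""
    params = op.get("parameters", [])
    return {
        "name": op["operationId"],
        "description": op.get("summary", ""),
        "inputSchema": {
            "type": "object",
            "properties": {p["name"]: p["schema"] for p in params},
            "required": [p["name"] for p in params if p.get("required")],
        },
    }

tool = operation_to_mcp_tool(openapi_operation)
print(tool["name"], tool["inputSchema"]["required"])
```

The point of the mapping is that the model never sees the service's code, only this structured description; if the source spec is missing or wrong, the tool description inherits every flaw.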
With a solid foundation in place for generating reliable MCP servers, leaders across the industry are now applying this capability to solve stubborn, domain-specific challenges and accelerate their workflows. When it comes to developer productivity, the initial results show a dramatic impact on complex configuration and onboarding processes, directly improving the DevOps lifecycle.
For Sai Krishna, Director of Engineering at LambdaTest, MCP was the key to eliminating a significant source of friction for his customers. “We developed a HyperExecute MCP server that acts as an intelligent intermediary,” Krishna shares. “It scans the customer’s project to understand their framework, language, and current test commands. With this context, it automatically generates the necessary HyperExecute YAML configuration in seconds. That entire two-week onboarding process is now reduced to less than an hour.”
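The onboarding pattern Krishna describes can be sketched in a few lines: inspect the project for framework clues, then derive the configuration automatically. The detection rules, keys, and commands below are illustrative stand-ins, not HyperExecute's actual schema.

```python
def detect_framework(files: set[str]) -> str:
    """Guess the test framework from marker files (illustrative rules only)."""
    if "pytest.ini" in files or "conftest.py" in files:
        return "pytest"
    if "pom.xml" in files:
        return "testng"
    return "unknown"

def generate_config(files: set[str]) -> dict:
    """Emit a config dict (which would then be serialized to YAML)."""
    framework = detect_framework(files)
    commands = {"pytest": "pytest -n auto", "testng": "mvn test"}
    return {
        "version": "0.1",
        "runson": "linux",
        "testRunnerCommand": commands.get(framework, "echo 'configure me'"),
    }

config = generate_config({"conftest.py", "requirements.txt"})
print(config["testRunnerCommand"])  # → pytest -n auto
```

An AI-backed version replaces the hard-coded rules with a model call, but the shape is the same: project context in, validated configuration out.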
This demonstrates that some of MCP’s most immediate value lies in automating complex configurations that have historically drained engineering time, and Krishna sees this evolving into intelligent debugging, where MCP servers can analyze failure logs and suggest fixes on the fly.
Teams should identify their most time-consuming setup processes as prime candidates for this kind of AI-driven automation.
Krishna also highlights that as MCP drives more Agentic AI development, DevOps teams must adopt a new mindset. “When an application uses a large language model, you can’t ‘test’ it in the traditional, deterministic sense,” he says. “The term testing no longer holds good; this is where the term ‘evaluation’ comes into play.” This distinction means engineering leaders should adapt their quality assurance strategies from simple pass/fail tests to continuous, large-scale evaluation frameworks that give them the fast feedback they need.
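The testing-to-evaluation shift can be sketched minimally: instead of one deterministic assertion, sample the model many times, grade each output, and gate on an aggregate pass rate. `fake_model` below is a stand-in for a real LLM call, and the threshold is an arbitrary illustration.

```python
import random

def fake_model(prompt: str, seed: int) -> str:
    # Stand-in for a non-deterministic LLM: usually right, occasionally not.
    random.seed(seed)
    return "Paris" if random.random() > 0.1 else "Lyon"

def evaluate(prompt: str, grader, runs: int = 100) -> float:
    """Return the fraction of sampled outputs the grader accepts."""
    passes = sum(grader(fake_model(prompt, seed=i)) for i in range(runs))
    return passes / runs

score = evaluate("Capital of France?", grader=lambda out: out == "Paris")
if score < 0.5:  # gate the release on an aggregate threshold, not one run
    raise SystemExit(f"quality regression: pass rate {score:.0%}")
print(f"pass rate: {score:.0%}")
```

Production evaluation frameworks add graded rubrics, reference datasets, and trend tracking, but the core change is the same: the unit of quality is a distribution of outcomes, not a single assertion.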
But the impact of MCP extends far beyond internal developer tools, reaching directly to non-technical business users. At HubSpot, the focus has been on democratizing data analysis for their customers, empowering them to find answers without needing to rely on a data specialist.
Karen Ng, SVP of Product and Partnerships at HubSpot, describes the self-service power their MCP-driven ChatGPT connector has unlocked for marketing and revenue teams.
“We’ve had beta users tell us they’d been waiting on RevOps for a question they were able to resolve themselves with the connector,” Ng shares. “It’s helping our customers identify risks and opportunities to grow while being accessible enough for non-technical users to leverage their own data.”
This highlights a critical strategic opportunity for business leaders to think beyond internal efficiency gains. By using MCP to embed AI capabilities directly into customer-facing workflows, organizations can empower their business teams to become more self-sufficient and data-driven.
And this kind of empowerment continues into the data infrastructure domain, where organizations are increasingly using MCP to abstract away the complexity of traditional query languages. “Our goal is to create a more intuitive bridge between users and their own complex datasets,” says Weimo Liu, co-founder and CEO of PuppyGraph. “MCP enables us to answer data-driven questions more effectively, helping users gain insights from their own data in a more natural, conversational way.”
This points to a powerful application for data teams: using MCP to create a conversational front-end for complex databases. The recommendation is to identify high-value datasets and use this approach to make them accessible to a broader audience of business analysts and decision-makers who are not experts in SQL or other query languages.
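One common way to build that conversational front-end is to expose a curated, parameterized query as a named tool, so an agent (or the business user behind it) supplies plain arguments and never writes SQL. The table, data, and tool name below are made up for illustration; the pattern is what matters.

```python
import sqlite3

# Toy dataset standing in for a real, high-value table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("emea", 120.0), ("emea", 80.0), ("apac", 50.0)])

def revenue_by_region(region: str) -> float:
    """Tool-shaped wrapper: safe, parameterized, no free-form SQL exposed."""
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE region = ?",
        (region,),
    ).fetchone()
    return row[0]

print(revenue_by_region("emea"))  # → 200.0
```

Because the SQL is fixed and parameterized, the model can only ask well-formed questions of the data, which also limits injection risk compared with letting an agent generate arbitrary queries.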
A New Economy of AI-Powered Tools
While these specific applications demonstrate clear productivity gains, they also point toward a much larger strategic shift. The true power of a standardized protocol like MCP lies not just in solving individual problems but in creating an entirely new economy for how AI capabilities are developed, shared, and consumed. This begins with the democratization of AI development itself, a concept that Greg Jennings, VP of Engineering for AI at Anaconda, sees as MCP’s most profound impact.
“MCP has standardized connections, allowing any model to ‘plug into’ tools and data sources with minimal custom work, which is dramatically lowering barriers to experimentation,” Jennings says. “Developers no longer need to wait for large companies to prioritize specific use cases. Anyone with basic programming skills can create an integration that allows AI models to connect to their favorite tools or data sources.”
Leaders should, therefore, treat MCP as more than just an integration project; it is a platform for fostering widespread innovation. By lowering the barrier to entry, organizations can empower their own domain experts to build valuable AI-powered tools that central teams might never have anticipated.
This democratization naturally leads to the next logical step: monetization. As more developers begin building specialized tools, a new marketplace of AI capabilities emerges. Michael Pytel, Lead Technologist at VASS, sees this as a significant opportunity for businesses to create new revenue streams from their unique expertise, particularly in the enterprise systems space.
“MCP creates the potential for a new ecosystem where partners and developers can build and monetize specialized MCP servers,” Pytel explains. “For instance, a third-party could offer a highly specific, AI-accessible logistics calculation tool that any enterprise agent could use, creating new revenue paths—an MCP for Freight Rate Shopping with SAP integration is a perfect example.”
Forward-thinking businesses should therefore not only look at consuming these third-party AI services but also evaluate their own core competencies. If you have proprietary data or a valuable business process, MCP provides a clear pathway to package that asset into a monetizable, AI-accessible tool for a wider market.
Navigating the Challenges of Production MCP
However, this new ecosystem of AI-powered tools and monetizable services also introduces a new class of enterprise-grade challenges. With this power for an AI to act on a user’s behalf comes the critical responsibility of ensuring security, control, and high reliability. Among these, a primary hurdle is managing governance and identity, especially within core enterprise systems where the stakes are very high.
“This is a primary focus for us at VASS,” says Pytel, emphasizing that an AI agent is an extension of its human user. “When an AI agent performs an action, it’s doing so on behalf of a human. It’s critical that MCP servers are built to validate the user’s identity and enforce their specific permissions for every request, preventing the agent from becoming a security vulnerability.”
This means that before deploying AI agents that can alter data or systems, leaders must extend their existing identity and access management policies to account for them. The core question must shift from, ‘What is this user allowed to do?’ to, ‘What is this user allowed to ask an AI to do on their behalf?’
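The enforcement pattern Pytel describes is simple to sketch: every tool invocation carries the human user's identity, and the server checks that user's permissions before acting. The scope names and tool registry here are illustrative.

```python
# Illustrative permission model: users hold scopes, tools require them.
PERMISSIONS = {
    "alice": {"crm:read", "crm:write"},
    "bob": {"crm:read"},
}

TOOL_SCOPES = {
    "lookup_account": "crm:read",
    "update_account": "crm:write",
}

def invoke_tool(user: str, tool: str) -> str:
    """Validate the human user's scopes on every request, not once at login."""
    required = TOOL_SCOPES[tool]
    if required not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} may not ask an agent to run {tool}")
    return f"{tool} executed for {user}"

print(invoke_tool("alice", "update_account"))  # → update_account executed for alice
```

The key design choice is that the check runs inside the MCP server on every request, so an agent can never accumulate more authority than the person it is acting for.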
Alongside governance, securing the MCP server itself is a foundational concern. Krishna of LambdaTest warns that new endpoints can introduce new attack vectors. “Prompt injection attacks are an escalating risk,” he says. “That’s why we run our MCP servers remotely, so customer data and environment files never leave their local system.”
Krishna advocates for a security-first approach in MCP architecture, focusing on patterns that protect user data and leveraging open-source tools to collectively harden the ecosystem. To help the entire community, his team open-sourced a tool called Secure Hulk, which can scan any MCP server for common vulnerabilities.
Beyond these security measures, establishing trust with users requires a level of consistency that current AI models can struggle to provide. Liu of PuppyGraph points to the fundamental challenge of reliability when AI is used for data analysis. “One key challenge is generating deterministic and reproducible results,” Liu highlights. “For example, when users ask, ‘Who collaborates most with Will Smith?’, MCP typically returns the correct answer. However, there is still a 1%–5% chance it incorrectly returns Will Smith himself, which undermines reliability.”
This challenge of non-determinism is a major hurdle to enterprise adoption, as even a small error rate can erode confidence in an entire data platform. It has led successful data infrastructure companies like PuppyGraph to a clear strategic conclusion: focus on building highly curated, domain-specific agents rather than unreliable, general-purpose chatbots. Businesses should, therefore, prioritize reliability within a narrow scope to build a foundation of user trust before attempting to broaden an AI’s capabilities.
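One mitigation for the specific failure mode Liu describes is to compute the answer deterministically from the data rather than trusting the model's free-form output, so the query subject can never be returned as their own collaborator. The collaboration data below is invented for the example.

```python
from collections import Counter

# Toy edge list standing in for a real collaboration graph.
collaborations = [
    ("Will Smith", "Martin Lawrence"),
    ("Will Smith", "Martin Lawrence"),
    ("Will Smith", "Tommy Lee Jones"),
]

def top_collaborator(person: str) -> str:
    """Deterministic answer: the subject is excluded by construction,
    and ties break alphabetically rather than by model whim."""
    counts = Counter()
    for a, b in collaborations:
        if person in (a, b):
            counts[b if a == person else a] += 1
    return max(sorted(counts), key=lambda name: counts[name])

print(top_collaborator("Will Smith"))  # → Martin Lawrence
```

In this split of labor, the model's job shrinks to translating the question into a tool call; the graph query itself is reproducible, which is exactly the narrow-scope reliability the next paragraph argues for.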
The Road Ahead
The path from a promising protocol to a production-ready ecosystem shows that Model Context Protocol is doing more than just connecting applications. It is fundamentally reshaping how businesses build with AI, one secure and context-aware tool at a time.
“This journey begins with solving the foundational ‘specification gap’ to ensure developers can build on a solid, AI-ready footing,” Rodda says. “From there, it expands to tangible applications that accelerate DevOps workflows, empower non-technical users, and even create new monetizable marketplaces for specialized expertise.”
And now, the conversation is maturing to address the critical next frontier of challenges: governing a decentralized ecosystem, enforcing user identity, and building the highly-available systems required for enterprise adoption.
“While the progress is clear, many leaders believe the community is still in the very early stages of this transformation,” Rodda underscores. “The work being done today to solve these complex issues of security and trust is what makes the next generation of AI possible. The engineers building on MCP are not just integrating systems; they are laying the groundwork for a future where business, data, and artificial intelligence operate as a single, cohesive intelligence.”