Apollo GraphQL today added a Model Context Protocol (MCP) server to its portfolio to make it simpler to integrate artificial intelligence (AI) agents with application programming interfaces (APIs) based on GraphQL.

This latest addition to the toolkits provided by the company makes it possible to integrate any application that supports GraphQL APIs with any AI agent that supports MCP, says Apollo GraphQL CTO Matt DeBergalis.


Originally developed by Anthropic, MCP is based on a client-server architecture that makes use of JSON-based remote procedure calls (RPCs) to enable AI agents to invoke functions, fetch data and use predefined prompts. That capability eliminates the need to build connectors for each AI agent that an organization might deploy.
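To make that architecture concrete, the sketch below builds the kind of JSON-RPC 2.0 envelope an MCP client sends when an agent invokes a server-side tool. The `tools/call` method comes from the MCP specification; the tool name and arguments (`get_order_status`, `order_id`) are hypothetical examples, not part of Apollo's product.

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Construct an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# A hypothetical agent asking an MCP server to run one of its tools:
request = build_tool_call(1, "get_order_status", {"order_id": "A-1001"})
print(json.dumps(request, indent=2))
```

Because every MCP-capable agent speaks this same message format, the server only has to expose a tool once rather than maintain a bespoke connector per agent.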

GraphQL is an open-source query language for APIs that provides a complete description of the data being exposed. That capability allows any client accessing the API to request exactly the data it needs, making it a more efficient alternative to REST APIs.
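The contrast can be illustrated with a small sketch. The record and field names below (`user`, `name`, `email`) are hypothetical; `select_fields` merely mimics how a GraphQL server returns only the fields named in the query, where a typical REST endpoint would return the whole resource.

```python
# Hypothetical full record a REST endpoint like /users/42 might return:
rest_response = {
    "id": 42,
    "name": "Ada",
    "email": "ada@example.com",
    "address": "1 Analytical Way",
    "created_at": "2020-01-01",
}

# A GraphQL query names exactly the fields the client needs:
query = """
query {
  user(id: 42) {
    name
    email
  }
}
"""

def select_fields(record, fields):
    """Mimic GraphQL field selection: keep only the requested fields."""
    return {field: record[field] for field in fields}

graphql_response = {"user": select_fields(rest_response, ["name", "email"])}
print(graphql_response)
```

For an AI agent paying per token, the difference between shipping the whole record and shipping two fields compounds quickly across thousands of calls.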

REST APIs are not going away any time soon, but given the use cases for AI agents, the need for declarative GraphQL APIs that provide more granular control over what data is accessed will become more pronounced as thousands of AI agents are deployed across the enterprise, says DeBergalis.

Getting data from APIs into the right shape for AI consumption involves complex challenges around discovery, sequencing, security gating, retries, parallelization, caching and numerous other functional and non-functional requirements that GraphQL APIs are well suited to address, he adds. That capability is critical for ensuring execution patterns work at scale exactly the same way every time, notes DeBergalis.

Additionally, IT organizations can enforce policies to ensure interactions are limited to pre-approved operations and surface areas. Anyone with access to the graph can wire up MCP tools with the appropriate level of governance. That capability will help reduce costs as queries made by AI agents start to consume thousands of tokens, says DeBergalis.
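One way to picture that governance model is an allowlist check in front of the graph. This is a minimal sketch of the idea only: the operation names are hypothetical, and in Apollo's tooling this role is played by pre-approved (persisted) operations rather than application code like this.

```python
# Hypothetical set of operations an IT organization has signed off on:
APPROVED_OPERATIONS = {"GetOrderStatus", "ListProducts"}

def execute_if_approved(operation_name, run):
    """Run a GraphQL operation only if it is on the pre-approved list."""
    if operation_name not in APPROVED_OPERATIONS:
        raise PermissionError(f"Operation {operation_name!r} is not pre-approved")
    return run()

# An approved operation executes normally (the resolver here is a stub):
result = execute_if_approved("GetOrderStatus", lambda: {"status": "shipped"})
print(result)
```

Anything an agent improvises outside the approved surface area is rejected before it touches backend systems, which is what keeps agent behavior predictable and auditable.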

Finally, the large language models (LLMs) that drive AI agents will be able to reason about the meaning of the graph and its objects to surface additional insights, he adds.

The number of IT teams encountering API challenges as they look to build and deploy AI agents is already increasing rapidly. “Every customer we talk to has made agentic AI priority one,” says DeBergalis.

The challenge, as always with any emerging technology, is finding the most efficient means of integrating it into legacy IT environments. In the case of AI agents, that’s especially critical because all the data those agents need to access resides in existing applications.

It’s not clear to what degree the rise of AI agents might spur further adoption of GraphQL, but as AI agents proliferate throughout the enterprise, the need to provide them with an efficient means of accessing massive amounts of data will soon become a pressing issue for everyone involved.