
Apollo GraphQL Launches MCP Server: a New Gateway Between AI Agents and Enterprise APIs


Apollo GraphQL recently launched its MCP Server, enabling businesses to securely and efficiently integrate AI agents with existing APIs through GraphQL. By reducing development overhead, improving governance, and accelerating AI feature delivery, the platform aims to help teams scale innovation and shorten the time-to-value of their AI investments.

At the core of this offering is the Model Context Protocol (MCP), which establishes a standardized interface between large language models (LLMs) like ChatGPT or Claude and enterprise systems. The Apollo GraphQL MCP Server leverages GraphQL to create a declarative, scalable layer that connects AI agents with backend APIs, allowing them to perform tasks such as querying data, executing business logic, or orchestrating multi-step operations. Matt DeBergalis, Apollo GraphQL's CTO and co-founder, explains:

APIs are the entry point to capabilities: adding items to carts, checking order status, scheduling appointments, updating inventory. When AI can reliably interact with these systems, we unlock an entirely more capable kind of software where natural language becomes the interface to complex business operations. MCP makes this possible by providing the connective tissue between AI's language understanding and your API infrastructure.


Apollo GraphQL MCP Server orchestrates access by AI Models to existing APIs (source)

GraphQL is a good candidate as an abstraction layer for orchestrating complex, policy-aware, multi-API workflows. It enables deterministic execution, selective data retrieval, and embedded policy enforcement—core requirements for AI systems that interact with multiple backend APIs and must deliver consistent, secure results.

The integration of GraphQL with MCP also helps manage token usage and response precision. By enabling LLMs to request only the necessary fields, the system minimizes overhead and reduces the risk of irrelevant or misleading outputs—critical for performance and user trust.
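To illustrate, a minimal sketch of what such a field-scoped MCP tool query might look like; the schema and field names (`order`, `status`, `estimatedDelivery`) are hypothetical, not part of Apollo's announcement:

```graphql
# The AI requests only the two fields it needs for the task,
# rather than receiving the full order object a REST endpoint
# might return. Fewer fields means fewer tokens in the context
# window and less irrelevant data for the model to misread.
query GetOrderStatus($orderId: ID!) {
  order(id: $orderId) {
    status
    estimatedDelivery
  }
}
```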

GraphQL's declarative nature and self-documenting schemas empower AI agents to reason over APIs effectively. LLMs can explore schema metadata and dynamically form queries with minimal manual tool configuration, supporting more flexible AI interfaces and adaptive behaviors.

MCP tools can either be predefined—using static GraphQL operations—or generated dynamically through schema introspection. Static tools offer predictability; introspection allows more open-ended exploration by AI, including progressive schema learning and on-demand query execution.
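The two styles can be sketched as follows. The mutation below stands in for a predefined static tool (the `addItemToCart` schema is hypothetical), while the second operation uses GraphQL's standard introspection fields, which any compliant server supports:

```graphql
# Static tool: a predefined, vetted operation exposed to the
# model as-is. The model fills in variables but cannot change
# the shape of the query.
mutation AddItemToCart($cartId: ID!, $sku: String!, $quantity: Int!) {
  addItemToCart(cartId: $cartId, sku: $sku, quantity: $quantity) {
    cart {
      id
      totalItems
    }
  }
}

# Introspection-based discovery: the model explores what the
# schema offers and forms queries on demand.
query DiscoverSchema {
  __schema {
    queryType {
      fields {
        name
        description
      }
    }
  }
}
```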

Crucially, teams can integrate existing REST APIs via Apollo Connectors without rebuilding or migrating services. This means businesses can adopt AI interfaces with minimal disruption, using the MCP Server to layer intelligence on top of their current architecture.
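As a rough sketch of this pattern, an Apollo Connector maps a REST endpoint into the graph declaratively via the `@source` and `@connect` directives; the base URL, endpoint path, and field selection below are hypothetical, and the exact directive arguments may differ by Connectors version:

```graphql
extend schema
  @link(url: "https://specs.apollo.dev/connect/v0.1",
        import: ["@connect", "@source"])
  @source(name: "orders", http: { baseURL: "https://api.example.com" })

type Query {
  # The existing REST endpoint is exposed as a graph field;
  # no changes to the backing service are required.
  order(id: ID!): Order
    @connect(
      source: "orders"
      http: { GET: "/orders/{$args.id}" }
      selection: "id status estimatedDelivery"
    )
}

type Order {
  id: ID!
  status: String
  estimatedDelivery: String
}
```

MCP tools defined over this graph then reach the REST service through the Connector, so the AI layer sits on top of the current architecture rather than replacing it.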

As a protocol, MCP is quickly emerging as a foundational building block in the AI developer stack. It standardizes how language models access external tools and data, much like HTTP did for the web. With its growing importance, major players such as HashiCorp, GitHub, and Docker have also begun rolling out their MCP-compatible offerings—further signaling that tool-aware AI is on track to become the norm, not the exception.

Apollo GraphQL released its MCP Server as source-available software on GitHub under the Elastic License 2.0.

InfoQ spoke about the announcement with Matt DeBergalis, CTO and co-founder of Apollo GraphQL.

InfoQ: Can you elaborate on the governance model for exposing MCP tools in a multi-team or enterprise environment? How do you prevent unintended exposure or misuse of sensitive capabilities?

Matt DeBergalis: The governance challenge is real—teams need to ship MCP tools quickly to keep pace with AI, but they also need confidence these tools won't expose sensitive data or violate policies. Our approach is to implement MCP tools declaratively as part of an orchestration layer that abstracts APIs and services. When you define an MCP tool as a GraphQL query, you declare what data the AI needs and let the orchestration layer handle how to get it—including all the messy details of authentication, filtering, and policy enforcement across multiple systems.

This declarative approach is what makes governance scalable. A query that spans customer and shipping systems can enforce complex rules like "only show tracking details to loyalty members." The orchestration layer provides deterministic execution, so the same tool always behaves the same way. Teams get the agility to experiment and ship fast without mortgaging the future. 
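A loyalty-gated rule of the kind DeBergalis describes might be sketched like this, using the federation-style `@policy` directive; the types, the `loyalty_member` policy name, and the field shapes are hypothetical illustrations, not Apollo's published example:

```graphql
# Schema excerpt: the tracking field is gated by a declarative
# policy, so any MCP tool that selects it inherits the rule
# automatically.
type Order {
  id: ID!
  status: String
  trackingDetails: TrackingInfo @policy(policies: [["loyalty_member"]])
}

# MCP tool query spanning the customer and shipping domains;
# the orchestration layer's query plan enforces the policy
# before the shipping system is ever called.
query OrderWithTracking($orderId: ID!) {
  order(id: $orderId) {
    status
    trackingDetails {
      carrier
      eta
    }
  }
}
```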

InfoQ: What specific design patterns or architectural anti-patterns have you seen in early adopters of AI-API orchestration that GraphQL and MCP are helping to resolve?

DeBergalis: The anti-pattern is to add MCP to each API, forgoing the abstraction layer. This leads to challenges: non-deterministic execution, inefficient token usage when APIs return full objects, no good way to enforce cross-system policies, and brittle implementations that are quick to write but hard to update as the APIs, models, and even MCP itself evolve.

The solution is an orchestration layer, and GraphQL is perfect for this. It provides deterministic execution through query plans, returns only needed fields, enforces policies declaratively, and lets you iterate fast. GraphQL and MCP are a natural fit—GraphQL's declarative queries become MCP tools, giving AI a clean interface while the orchestration layer handles the complexity. This is how the open-standard AI stack comes together.

InfoQ: What observability or debugging tools does the Apollo MCP Server provide to help developers trace AI-generated queries and orchestrated API flows?

DeBergalis: The key is the query plan—when AI calls an MCP tool, you can see the exact deterministic execution path across your services. The declarative GraphQL query shows which APIs are called, in what order, and with what parameters. This isn't a black box; it's transparent orchestration you can trace, debug, and optimize.

Because GraphQL is declarative, you can analyze AI behavior patterns and improve your MCP tools based on real usage. The same observability that works for human developers works for AI-generated queries. This transparency is fundamental to the AI stack—you need to trust and verify what's happening under the hood.

InfoQ: What role do you see Apollo Federation playing in a multi-domain MCP deployment, where AI must reason across bounded contexts owned by separate teams?

DeBergalis: The graph becomes your unified semantic layer for AI. When teams own different domains—customers, inventory, fulfillment—the graph presents one coherent model that AI can traverse naturally. This is the power of GraphQL as an open standard: teams maintain autonomy while contributing to a shared semantic understanding.

The declarative nature of the graph means AI capabilities can evolve rapidly without breaking. Each team can update their domain, and the AI automatically gets the new capabilities through the graph. This combination of stability and agility is what the AI stack needs—open standards that let independent teams move fast while maintaining a coherent whole.
