The Model Context Protocol (MCP) lets an MCP server publish tools, data resources, and prompts in a form that Large Language Models (LLMs) can understand. This empowers LLMs to autonomously interact with these resources via an MCP client, expanding their capabilities to perform actions, retrieve information, and execute complex workflows.
What does MCP support?
- Tools: Functions an LLM can invoke (e.g., data lookups, operational tasks)
- Resources: File-like data an LLM can read (e.g., API responses, file contents)
- Prompts: Pre-written templates to guide LLM interaction with the server
- Sampling (not widely used): Lets a remote MCP server request LLM completions from the client, so the server can leverage the client-hosted model
An MCP client can, therefore, request to list available tools, call specific tools, list resources, or read resource content from a server.
How does MCP handle transport and authentication?
The MCP protocol offers flexible transport options: local deployments can use STDIO or HTTP (SSE or Streamable HTTP), while remote deployments use HTTP (SSE or Streamable HTTP).
While the protocol proposes OAuth 2.1 for authentication, an MCP server can also use custom headers for security.
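As an illustration of the custom-header approach, the sketch below attaches credentials to an HTTP request with the standard library. The endpoint URL, the X-API-Key header name, and the token values are all hypothetical; a production deployment would obtain a real access token through an OAuth 2.1 flow or use whatever scheme the server operator has chosen.

```python
import urllib.request

# Hypothetical MCP endpoint and credentials, for illustration only.
req = urllib.request.Request(
    "https://mcp.example.com/mcp",
    data=b'{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}',
    headers={
        "Content-Type": "application/json",
        # A custom header carrying a pre-shared API key...
        "X-API-Key": "my-secret-key",
        # ...or a standard bearer token obtained via OAuth 2.1.
        "Authorization": "Bearer <access-token>",
    },
    method="POST",
)
```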
What is the role of an MCP Server?
MCP Servers are lightweight programs that expose specific capabilities through the standardized Model Context Protocol, making tools available for use by AI agents.
Servers can offer Local Data Sources, such as files, databases, and services that MCP servers can access locally, or Remote Services, which are external systems available over the internet (e.g., through APIs) that MCP servers can connect to.
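Conceptually, a server is a registry of capabilities plus a dispatcher that answers list and call requests. The toy sketch below illustrates that role with plain Python; it is not the official MCP SDK, and the lookup_order tool (and its canned response standing in for a local database query) is invented for illustration.

```python
# A toy registry mapping tool names to local functions. Real servers are
# built with an MCP SDK; this only illustrates the server's role of
# exposing capabilities for discovery and invocation.
TOOLS = {}

def tool(fn):
    """Register a function as an invokable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_order(order_id: str) -> dict:
    # Local data source: in reality this might query a database or service.
    return {"order_id": order_id, "status": "shipped"}

def handle_request(request: dict) -> dict:
    """Dispatch a tools/list or tools/call style JSON-RPC request."""
    if request["method"] == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif request["method"] == "tools/call":
        fn = TOOLS[request["params"]["name"]]
        result = fn(**request["params"]["arguments"])
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```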
What is the role of an MCP Client?
MCP Clients access the capabilities exposed by MCP Servers. An MCP Host (the application embedding the AI model) uses an MCP Client to request data or actions; the client forwards the model's requests to MCP Servers, which in turn provide access to the requested data or capabilities.
How does MCP relate to building AI Agents?
MCP has the potential to make it easier for AI agents to take advantage of a growing list of pre-built MCP integrations, simplifying access to the capabilities of specific external tools. As the MCP ecosystem grows, more and more of these integrations will be created, and all of them will be easily accessible to AI agents developed with various tools.
Many initial adopters of MCP used it to develop integrations with their own custom-developed tooling, so MCP support can potentially make integration with such home-grown solutions easier for users.
How does MCP relate to API Management?
API management platforms serve as central points for the discovery and control of application functionality. This mission aligns well with MCP-defined services. On the provider side, API management solutions can facilitate the inclusion of MCP support in the information exposed about each API, immediately opening it up for use by AI agents, whether those agents are created with specific agent development tools or with third-party solutions.
Meanwhile, consumers of MCP tools will still need mechanisms to discover what tools are available, and managers of those tools will need to be able to control access, versioning, and performance.
Despite some claims that MCP will make APIs obsolete, APIs will remain part of the enterprise IT ecosystem for the foreseeable future, at the very least as a bridge for legacy systems, because MCP primarily abstracts the underlying API implementation complexity, rather than removing it entirely.
API management platforms enable users to create and manage tools that work with both MCP and existing API-based systems, including custom-developed ones, spanning both existing approaches and new emerging standards.
How does MCP relate to A2A?
A2A (Agent-to-Agent Protocol) aims to standardize communication and collaboration between AI agents. Through a client-server model, it allows agents to securely and dynamically connect to and work with one another. It defines how agents discover each other (via an Agent Card), exchange information, and coordinate actions.
In contrast, MCP focuses on standardizing how applications provide context (data and tools) to LLMs. In other words, where MCP focuses on managing the “vertical” connection between agents and tools, A2A aims to facilitate “horizontal” communication between agents.
For now, A2A is positioned as a supplement to MCP, premised on a clear distinction between “agent” and “tool.” However, this boundary is increasingly blurred: tools are becoming smarter, tending towards “agentification,” while agents also rely heavily on tools. As industry adoption of A2A grows, expect to see it incorporated into inter-agent communication frameworks.
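To make the "horizontal" discovery idea concrete, the sketch below shows a minimal, illustrative Agent Card of the kind an A2A agent might publish so that peers can find it. The agent, its URL, and the skill entry are invented, and the field names are indicative of the A2A draft specification rather than a definitive schema.

```python
# An illustrative A2A-style Agent Card: the metadata document an agent
# publishes so other agents can discover its identity and skills.
# All values here are hypothetical, and field names may differ as the
# A2A specification evolves.
agent_card = {
    "name": "inventory-agent",
    "description": "Answers questions about warehouse stock levels.",
    "url": "https://agents.example.com/inventory",
    "version": "1.0.0",
    "skills": [
        {
            "id": "check_stock",
            "description": "Report current stock for a given SKU.",
        }
    ],
}
```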
What are the enterprise benefits of MCP?
By adopting MCP, organizations can gain several advantages:
- Pre-built integrations: Offers a growing list of integrations that Large Language Models (LLMs) can directly plug into
- Flexibility in LLM providers: Allows organizations to easily switch between different LLM providers and vendors
- Enhanced data security: Provides best practices for securing data within an organization’s existing infrastructure
- Simplified AI agent integration: Designed to help organizations integrate AI agents with their existing software tools
- Easier access to external tools: Facilitates access for AI agents to the capabilities of specific external tools as the MCP ecosystem expands
- Improved integration with custom tooling: Potentially simplifies integration with home-grown solutions for users