Model Context Protocol: Unlocking Interoperability Across AI Ecosystems


Think back to your team’s first-generation AI agent. It likely involved a significant engineering effort to create a bespoke, point-to-point integration directly with a single key system like Salesforce or Dropbox. The result was a powerful, purpose-built tool that took significant time and effort to plan, design, test, and deploy. 

Now, fast forward to today. Your team just launched a new agent using the Model Context Protocol (MCP). Development was faster because the team built to a standardized protocol instead of writing new custom connections to enterprise applications like Salesforce, Jira, and Slack. The result: a shorter path to production deployment, a developer’s dream.

This agility is a game-changer, but like any powerful technology, it comes with trade-offs. This post will peek under the covers to explain exactly what MCP is doing to simplify integration, use a relatable analogy to clarify its function, and explore the infrastructure costs and performance considerations that arise. 

Ultimately, we’ll provide real recommendations for harnessing the full power of MCP without letting its hidden costs undermine your overall success.

What does MCP do?

Let’s understand what is powering this agility by looking under the hood and remembering what MCP is doing. For all intents and purposes, MCP does the following:

  • Standardizes the interface and environment AI models use to reach tools and data
  • Lets the LLM focus on data communication, tool selection, and tool usage
  • Enables modularity and interoperability in workflow/pipeline design (a “plug-and-play” ecosystem)
  • Provides developer abstraction and simplicity to accelerate development efforts

The idea is to simplify the process of building integrations with LLMs and make it easier to consume data and resources, while also simplifying business logic and rules using natural language. The trade-off is that a general-purpose LLM must now be dynamically loaded with data, expectations, and constraints before it can make decisions; this context must be assembled for each task.
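Under the hood, MCP is built on JSON-RPC 2.0: a client first discovers what a server offers, then invokes a tool by name. A minimal sketch of the two core messages follows; the method names come from the published MCP specification, but the `crm_lookup` tool and its arguments are hypothetical examples, not any real server’s API.

```python
import json

# Sketch of the two core MCP (JSON-RPC 2.0) exchanges an agent relies on.
# Method names follow the MCP spec; "crm_lookup" and its arguments are
# hypothetical illustrations, not a real server's tools.

# 1. Discovery: the client asks the server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. Invocation: the LLM picks a tool, and the client calls it by name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "crm_lookup",                   # hypothetical tool name
        "arguments": {"account": "Acme Corp"},  # hypothetical arguments
    },
}

print(json.dumps(call_request, indent=2))
```

Because every server answers the same `tools/list` and `tools/call` messages, the LLM-side code never changes when a new tool is plugged in.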

Fun Fact: There are already over 16K publicly available MCP servers.

MCP: the AI model’s concierge

When explaining MCP, I like to use the real-world analogy of a hotel concierge. Imagine you’ve just checked into a large hotel in a foreign city. You approach the concierge desk and ask for help planning a romantic evening of dinner and a theater show with your spouse. 

The concierge doesn’t personally cook the food, own the theater, or drive you there. Instead, they use their network of trusted local services to make calls, check systems, and gather information without involving the guest, all with the intent of finding the right restaurant, booking the correct theater show and seats, and figuring out whether a taxi, rideshare, or limo best fulfills your request.

They understand your preferences (“I prefer traditional French, romantic places with music”), apply access rules (“this restaurant only takes reservations through a certain system”), and return a curated answer: a dinner reservation, show tickets, and a town car for transportation. You never need to figure out how each business works or worry about the finer details. The concierge just handles the complexity behind the scenes.

MCP works the same way for AI models. When an LLM receives a request like “summarize the company’s latest sales numbers,” MCP fetches the right data from approved sources (e.g., databases, APIs, or cloud drives), navigates access controls, and delivers the relevant information in a consistent format.

It hides the complexity of connecting to multiple systems, ensuring the model always has the right context, securely and on demand. Just like a concierge ensuring guests get what they need effortlessly.

The hidden costs of agility

However, this efficiency has trade-offs. Similar to the hotel concierge, an initial conversation is necessary to define the AI model’s needs, restrictions, and preferences. This often requires a conversational back-and-forth, where initial recommendations lead to feedback, and the query is refined. This can introduce latency in reaching the final, satisfactory answer.

The key to success is understanding the trade-offs involved and then optimizing the solution for your particular needs. MCP helps solve engineering integration challenges, but in doing so, it introduces new infrastructure costs and performance considerations.

Let’s explore why these agents can be resource-intensive and provide real recommendations for harnessing the power of MCP without letting its hidden costs undermine your overall success.

What MCP does for the business

The primary purpose of MCP is to reduce design complexity, decrease developer time, and enhance engineering scalability. 

From a business perspective, it unlocks data silos and connects applications dynamically, enabling clearer visibility of data patterns, faster decision-making, and dynamic real-time operations.

The world before MCP

To understand MCP’s impact, consider the classic integration nightmare. Imagine your team evaluating 3 different LLMs (from Google, OpenAI, and Anthropic) and needing to connect them to 4 internal tools (Jira, Salesforce, Confluence, and a product database).

This is an “N x M” problem that requires building and maintaining 3 x 4 = 12 unique, point-to-point integrations. Adding a new tool means your engineers must write code or create new pipelines to connect it to all three models, resulting in significant time spent on unique “glue logic.”
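The arithmetic behind the “N x M” problem is easy to sketch: point-to-point integrations grow multiplicatively, while a shared protocol grows additively, since each model speaks MCP once and each tool exposes MCP once.

```python
models = ["Gemini", "GPT", "Claude"]                        # N = 3 LLMs
tools = ["Jira", "Salesforce", "Confluence", "ProductDB"]   # M = 4 tools

# Point-to-point: every model needs its own adapter for every tool.
point_to_point = len(models) * len(tools)   # N x M = 12 integrations

# With a shared protocol: one MCP client per model, one MCP server per tool.
with_mcp = len(models) + len(tools)         # N + M = 7 integrations

print(point_to_point, with_mcp)  # 12 vs 7

# Adding a 5th tool costs 3 new adapters point-to-point, but only 1 with MCP.
```

The gap widens as either side grows, which is why the savings compound for larger tool ecosystems.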

The challenge extends to managing changes. For instance, if an integration uses Dropbox for cloud file storage (connected via direct API or prebuilt SnapLogic Snaps, the quickest method to abstract API complexity), a later strategic decision to consolidate on OneDrive requires extensive adjustments to account for differences in access control and functionality across all existing integrations.

The world with MCP

With MCP, the team implements one standard “MCP Server” to manage file operations for all desired cloud storage systems, or creates a single multi-cloud file storage MCP server. The core advantage is an unchanged, single interface for the LLM’s interaction with the file system, eliminating the need to modify “glue code.” This substantially simplifies the process for agents and developers when swapping different LLMs or integrating new tools.

This reduction in engineering friction is the key benefit. Before MCP, the sheer complexity of integration made teams highly selective about connections, requiring manual tool specification and detailed mapping of API functionality to LLM capabilities. 

MCP lowers the friction and effort involved, accelerating the solution’s time-to-market and speeding up deployment to customers.

Dynamic capabilities drive agility

To better understand why this works, let’s consider the interaction with MCP servers. After establishing connectivity, the MCP server’s capabilities are dynamically advertised to the MCP client endpoint. 

This auto-discovery ability is one of the main advantages: developers don’t need to manually map or define a tool’s capability each time, as the LLM dynamically decides which ones to leverage. It’s like the hotel concierge who knows to skip irrelevant packages and focus solely on show tickets. MCP simplifies the creation of complex agents by making it easier to build a multi-tool ecosystem.
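Auto-discovery can be illustrated with a toy registry: the server advertises its tool catalog, and the client consumes it at runtime instead of maintaining a hand-written mapping. This is a plain-Python sketch of the idea, not the real MCP SDK, and the tool names are hypothetical.

```python
# Toy sketch of MCP-style capability advertisement (not the real MCP SDK).
# The server side publishes its catalog; tool names here are hypothetical.
SERVER_CATALOG = [
    {"name": "get_ticket", "description": "Fetch a Jira ticket by key"},
    {"name": "search_crm", "description": "Search Salesforce accounts"},
    {"name": "read_file",  "description": "Read a file from cloud storage"},
]

def discover_tools(catalog):
    """Client side: pull the advertised catalog instead of hard-coding a map."""
    return {tool["name"]: tool["description"] for tool in catalog}

tools = discover_tools(SERVER_CATALOG)

# The LLM (here, a stand-in keyword match) picks only the relevant tool
# for a request like "summarize Jira ticket PLAT-42", skipping the rest,
# like the concierge who ignores irrelevant packages.
chosen = next(name for name, desc in tools.items() if "Jira" in desc)
print(chosen)  # get_ticket
```

If the server adds or renames a tool, the client picks up the change on the next discovery call, with no redeployment of the agent.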

Smart enterprises know that every integration matters and agility is key. There are two paths to realizing MCP’s value:

  1. Leveraging commercially available MCP servers for interfacing with off-the-shelf applications (SaaS and on-prem).
  2. Creating custom MCP servers to expose proprietary business processes, data pipelines, purpose-built applications, reporting, and business-specific data insights, which unlocks the real competitive advantage.

Smarter data connections 

A holistic iPaaS like SnapLogic, which manages data, applications, and API management while remaining LLM-agnostic, facilitates best-of-breed choices.

Prebuilt connectors and pipeline patterns accelerate the creation of new pipelines, reducing effort and kickstarting the development of agents and automations. These custom automations and data pipelines can then be exposed as MCP servers for AI workflows and agents. 

The increase in the speed of design, deployment, and iteration leads to faster business decisions and better access to business data.

These efficiency gains unlock value often unattainable today. When deciding where to build your first MCP server, focus on these key business areas:

  • Time and resource-intensive areas where connecting data points is difficult
  • Data capabilities that directly drive business decision-making
  • Processes and procedures frequently stalled while waiting for data
  • Areas where the lack of information slows people down (e.g., “If I only had this information, I could do that faster or better”)

When executed correctly, investing in these “difficult to connect” areas can yield a significant business multiplier, improving customer experience, increasing revenue, and enabling personalized market offerings.

Food for thought: FAQ

Can existing SnapLogic pipelines be converted into MCP servers?

Yes, with the SnapLogic integration platform’s MCP server capability, any pipeline can be converted into a custom MCP server. This simplifies, expands, and enables reuse of the custom capabilities you have already built.

How are authentication and authorization handled with MCP servers?

User security and access control around authentication and authorization with MCP servers can be handled via two different mechanisms:

  1. Bearer token: a single access token is used for all requests, typically for basic access controls.
  2. OAuth 2.0: each user is individually authenticated and authorized with a unique, session-based access token, allowing refined tracking and access controls.
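The practical difference shows up in the credential each request carries. A minimal sketch follows; the token values are placeholders, and a real OAuth 2.0 deployment also involves an authorization server, scopes, and token refresh, which are omitted here.

```python
# Sketch of the two auth styles for MCP server requests (placeholder tokens).

# 1. Bearer token: one shared, long-lived token used by every caller.
shared_token = "SHARED_TOKEN"  # placeholder; issued once, rotated manually
bearer_headers = {"Authorization": f"Bearer {shared_token}"}

# 2. OAuth 2.0: each user exchanges credentials for a short-lived,
#    per-session access token, so access can be tracked and revoked per user.
def oauth_headers(user_access_token: str) -> dict:
    """Build per-user request headers from an individually issued token."""
    return {"Authorization": f"Bearer {user_access_token}"}

alice = oauth_headers("alice-session-token")  # unique per authenticated user
bob = oauth_headers("bob-session-token")
print(alice != bob)  # True: per-user tokens enable refined access control
```

With the shared bearer token, the server cannot tell callers apart; with per-user OAuth tokens, it can audit and restrict each user individually.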

Do MCP servers require capacity planning?

Yes, scalability and usage should be planned for and sized correctly. This is no different from what happens with API access.

How many pipelines map to a single MCP server?

This depends on the purpose and design of your MCP server, but it is not uncommon for multiple SnapLogic pipelines to map to a single MCP server. Individual MCP functions might map to different pipelines.

Harnessing agent power, responsibly

MCP is a groundbreaking protocol that simplifies the creation of powerful agents capable of delivering real business impact. The key to success lies in proper design and an understanding of the underlying technology. 

By implementing proper monitoring, guardrails, and real-time controls, teams can harness the power of MCP-enabled agents safely, sustainably, and profitably. With a clear grasp of the technology, its use cases, and the choice of an agnostic and flexible agent-building solution, your business needs will effectively dictate the right approach. 

Take a tour of the SnapLogic integration platform to learn more.

Jeffrey Wong
Director of Technical Product Marketing at SnapLogic
Category: AI