Glossary

Model Context Protocol (MCP)

Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI models connect to external tools, data sources, and services. It provides a universal interface so that any tool can work with any model, similar to how USB standardized device connections.

How It Works

Before MCP, every AI application built its own custom integrations. If you wanted your AI agent to read from Slack, query a database, and update Jira, you had to write separate integration code for each one. Every model provider had a different way of handling tool connections. This meant a lot of duplicated work.

MCP changes this by defining a standard protocol based on JSON-RPC 2.0, carried over either stdio (for local servers) or HTTP; current spec revisions use the Streamable HTTP transport, which streams responses via Server-Sent Events. A tool provider implements the MCP server specification once, and any MCP-compatible AI application can use that tool. An AI application implements the MCP client specification once, and it can connect to any MCP-compatible tool. This is the same pattern that made the web work: standardize the protocol, and everything becomes interoperable.
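
To make that concrete, here is a sketch of the first messages in a session, written as TypeScript values. The method names (initialize, tools/list) come from the published spec; the example tool and client name are illustrative.

```typescript
// Two JSON-RPC 2.0 requests from the start of a typical MCP session.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // the spec revision the client speaks
    capabilities: {},
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

const listToolsRequest = { jsonrpc: "2.0", id: 2, method: "tools/list" };

// The server's response: each tool carries a JSON Schema for its arguments,
// which is how any MCP client can discover and call it.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    tools: [
      {
        name: "track-shipment",
        description: "Look up a shipment by tracking number",
        inputSchema: {
          type: "object",
          properties: { trackingNumber: { type: "string" } },
          required: ["trackingNumber"],
        },
      },
    ],
  },
};
```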

The protocol covers three main capabilities: tools (functions the model can call), resources (data the model can read, addressed by URI), and prompts (templates for common interactions). A single MCP server might expose all three. For example, a CRM integration might provide tools for creating and updating records, resources for reading customer data, and prompts for common sales tasks.
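
As a sketch of what exposing all three looks like, assuming the TypeScript SDK's McpServer class (method names follow the 1.x SDK and may differ between versions); the CRM client it calls is a hypothetical stub:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical internal CRM client, stubbed so the sketch is self-contained.
const crmApi = {
  createRecord: async (r: { name: string; email: string }) => ({ id: "rec_123", ...r }),
  listCustomersJson: async () => JSON.stringify([{ id: "cus_1", name: "Acme Corp" }]),
};

const server = new McpServer({ name: "crm", version: "1.0.0" });

// Tool: a function the model can call, with a Zod schema for its arguments.
server.tool(
  "create-record",
  "Create a new CRM record for a customer",
  { name: z.string(), email: z.string().email() },
  async ({ name, email }) => {
    const record = await crmApi.createRecord({ name, email });
    return { content: [{ type: "text", text: `Created record ${record.id}` }] };
  }
);

// Resource: data the model can read, addressed by URI.
server.resource("customers", "crm://customers/all", async (uri) => ({
  contents: [{ uri: uri.href, text: await crmApi.listCustomersJson() }],
}));

// Prompt: a reusable template for a common sales task.
server.prompt("qualify-lead", { company: z.string() }, ({ company }) => ({
  messages: [
    { role: "user", content: { type: "text", text: `Qualify ${company} as a sales lead.` } },
  ],
}));

// stdio transport: suitable for a local server launched by the client.
await server.connect(new StdioServerTransport());
```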

MCP supports both local servers (running on the user's machine, useful for file system or local tool access) and remote servers (hosted services, useful for SaaS integrations). Authentication in remote MCP uses OAuth 2.1, and the transport layer supports streaming responses for long-running operations.

For enterprise AI development, MCP reduces integration cost and increases flexibility. You can swap AI models without rebuilding your tool integrations. You can add new tools without modifying your AI application's core logic. The ecosystem of pre-built MCP servers has grown quickly, covering GitHub, Slack, Google Drive, Postgres, BigQuery, Notion, and hundreds of other services.

Where MCP isn't the answer: one-off integrations where a direct API call is simpler, or latency-critical paths where the MCP round-trip overhead matters. MCP shines when you have many tools, multiple models, and you want the integration to outlive any one of them. MCP adoption accelerated through 2025 with first-party support from Anthropic, OpenAI, Google, and most major AI dev tools. For teams building AI agents, designing around MCP from the start means integrations stay portable and future-proof.

In Practice

The MCP ecosystem includes official SDKs in TypeScript, Python, Kotlin, Swift, and Go. Anthropic maintains a registry of reference MCP servers (filesystem, GitHub, Postgres, Slack, Sentry, Puppeteer). Third-party MCP servers cover Salesforce, HubSpot, Linear, Notion, Stripe, and many more. Claude Desktop, Claude Code, Cursor, and VS Code's AI assistants all ship MCP client support out of the box.

Typical configuration: servers declare tools with a JSON Schema for input arguments, optional progress tokens for long-running operations, and typed resource URIs. Clients establish a session, call tools/list to discover available tools, and invoke them via tools/call with arguments. Remote MCP over HTTP uses OAuth 2.1 for auth, with dynamic client registration per RFC 7591. As a rough latency guide: local stdio servers typically respond in under 50ms, while remote servers commonly add 100-500ms round-trip.
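
A minimal client-side session, assuming the TypeScript SDK's Client and stdio transport; the server path and tool arguments are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch a local MCP server as a subprocess and talk to it over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./dist/server.js"], // hypothetical path to your built server
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport); // performs the initialize handshake

// Discover what the server offers, then invoke a tool by name.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "track-shipment",
  arguments: { trackingNumber: "1Z999AA10123456784" },
});
console.log(result.content);
```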

A working integration pattern: expose your internal API as an MCP server using the TypeScript SDK. Define each tool with a Zod schema for its arguments, a handler function, and a human-readable description the LLM uses to decide when to call it. Deploy the server (stdio for local dev, HTTP for production). Configure your agent (the Claude Agent SDK, or any MCP client) to connect. The agent now has access to your tools without any model-specific integration code, and if you swap Claude for GPT later, the same MCP server works without changes.
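
For the configuration step, most MCP clients accept a JSON config along these lines (rendered here as a TypeScript value; the server name, path, and environment variable are placeholders):

```typescript
// Typical client-side wiring for a local (stdio) MCP server. Clients such as
// Claude Desktop and Claude Code read a similar "mcpServers" block from JSON.
const mcpConfig = {
  mcpServers: {
    "internal-api": {
      command: "node",
      args: ["./dist/server.js"], // hypothetical path to the built server
      env: { API_BASE_URL: "https://internal.example.com" }, // hypothetical
    },
  },
};
```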

Worked Example

A mid-sized logistics company wants Claude to help their operations team answer customer questions about shipments. Before MCP, they would have built a custom tool integration coupled to Anthropic's tool-use API, and when they later wanted to try GPT-4o for cost reasons, they would have had to rewrite the integration against OpenAI's function-calling format.

With MCP, they build one MCP server using the TypeScript SDK. It exposes three tools: track-shipment (by tracking number or PO), get-customer-contract (pulls service-level terms from Salesforce), and estimate-delivery (calls an internal ML model). Each tool is defined with a Zod schema for arguments and a short description. The server is deployed behind their existing auth gateway as a remote MCP server over HTTP+SSE with OAuth 2.1.
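
A sketch of how one of those tool definitions might look with the TypeScript SDK; the schema and the internal lookup are illustrative stand-ins for the company's real services:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Hypothetical internal shipment service, stubbed for the sketch.
async function lookupShipment(q: { trackingNumber?: string; poNumber?: string }) {
  return { ref: q.poNumber ?? q.trackingNumber, status: "in_transit", eta: "2025-06-12" };
}

const server = new McpServer({ name: "logistics", version: "1.0.0" });

// The description is what the model reads when deciding whether to call it.
server.tool(
  "track-shipment",
  "Track a shipment by tracking number or purchase order (PO) number",
  { trackingNumber: z.string().optional(), poNumber: z.string().optional() },
  async ({ trackingNumber, poNumber }) => {
    const shipment = await lookupShipment({ trackingNumber, poNumber });
    return { content: [{ type: "text", text: JSON.stringify(shipment) }] };
  }
);
```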

Claude Sonnet running in their agent framework connects to the MCP server at startup via the Claude Agent SDK. When a customer emails asking "where is my shipment PO-48291?", the agent calls track-shipment with the PO, then estimate-delivery with the result, then composes a reply. Three months later they A/B test GPT-4o using the same MCP server. The agent framework swaps providers with a config change. Zero tool code is rewritten. Cost comparison runs in a day instead of a week. The operations lead says the cleanest architectural decision they made that year was wrapping their internal APIs behind MCP from the start.

What People Get Wrong

Myth

MCP is just Anthropic's version of function calling.

Reality

Function calling is a per-model API contract (each provider has its own format). MCP is a model-agnostic protocol with versioning, capability discovery, streaming, and auth. A function-calling integration binds you to one provider's API surface; an MCP integration survives model swaps and works across Anthropic, OpenAI, Google, and open models with MCP client support. They sit at different layers of the stack.
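
The layering difference is easiest to see side by side. Below, the first value is a tool definition in OpenAI's function-calling format, which lives inside one provider's API request; the second is the provider-neutral descriptor an MCP server returns from tools/list:

```typescript
// Per-model contract: an OpenAI function-calling tool definition is embedded
// in one provider's chat API request and nowhere else.
const openAiTool = {
  type: "function",
  function: {
    name: "track_shipment",
    description: "Track a shipment by PO number",
    parameters: {
      type: "object",
      properties: { poNumber: { type: "string" } },
      required: ["poNumber"],
    },
  },
};

// Protocol-level contract: the same capability exposed as an MCP tool is
// discovered via tools/list by any MCP client, regardless of model vendor.
const mcpToolDescriptor = {
  name: "track-shipment",
  description: "Track a shipment by PO number",
  inputSchema: {
    type: "object",
    properties: { poNumber: { type: "string" } },
    required: ["poNumber"],
  },
};
```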

Myth

You should expose every internal API as an MCP server.

Reality

MCP is useful when a tool will be called by AI models. It's overhead when the consumer is deterministic application code. Exposing every internal API as MCP adds maintenance burden without benefit. A better pattern: keep internal APIs as they are, and build a dedicated MCP server layer that wraps the subset of functionality intended for AI use, often with additional safety checks like argument validation and rate limits.
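
A sketch of that wrapper pattern, assuming a hypothetical internal REST endpoint: the internal API stays untouched, and the MCP layer adds schema validation and a crude rate limit.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "ai-gateway", version: "1.0.0" });

// Naive in-memory rate limiter, standing in for real infrastructure.
let callsThisMinute = 0;
setInterval(() => (callsThisMinute = 0), 60_000);

// Wrap ONE internal endpoint for AI use; everything else stays internal.
server.tool(
  "get-invoice",
  "Fetch a single invoice by ID (read-only)",
  { invoiceId: z.string().regex(/^INV-\d+$/) }, // validation the raw API lacks
  async ({ invoiceId }) => {
    if (++callsThisMinute > 30) {
      return { content: [{ type: "text", text: "Rate limit exceeded" }], isError: true };
    }
    // Hypothetical internal REST API, untouched by the MCP layer.
    const res = await fetch(`https://internal.example.com/invoices/${invoiceId}`);
    return { content: [{ type: "text", text: await res.text() }] };
  }
);
```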

Myth

MCP solves AI security concerns with external tool access.

Reality

MCP standardizes the connection protocol. It doesn't automatically solve prompt injection, tool misuse, or data exfiltration. Every MCP tool you expose is a potential vector for an agent to be tricked into misusing it. Production MCP deployments still need guardrails: argument validation, action-level auth, allow-lists, and human approval for sensitive operations. MCP is plumbing, not a security layer.
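
As a sketch, client-side guardrails can be as simple as an allow-list plus a human-approval hook; requestHumanApproval here is a hypothetical stub you would wire to your own workflow:

```typescript
// Guardrails around tool invocation on the agent side.
const ALLOWED_TOOLS = new Set(["track-shipment", "estimate-delivery"]);
const NEEDS_APPROVAL = new Set(["update-customer-contract"]);

async function guardedCallTool(
  client: { callTool(p: { name: string; arguments: Record<string, unknown> }): Promise<unknown> },
  name: string,
  args: Record<string, unknown>
) {
  // Deny anything not explicitly allow-listed.
  if (!ALLOWED_TOOLS.has(name) && !NEEDS_APPROVAL.has(name)) {
    throw new Error(`Tool ${name} is not on the allow-list`);
  }
  // Sensitive operations require a human sign-off before execution.
  if (NEEDS_APPROVAL.has(name) && !(await requestHumanApproval(name, args))) {
    throw new Error(`Human approver rejected ${name}`);
  }
  return client.callTool({ name, arguments: args });
}

// Stub approval hook for the sketch; wire this to Slack, email, or a queue.
async function requestHumanApproval(tool: string, args: unknown): Promise<boolean> {
  console.log(`Approval requested for ${tool}`, args);
  return false; // deny by default
}
```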

Related Solutions

AI Agent Development
Enterprise AI Integration
Agentic Automation

Need help implementing this?

We build production AI systems for enterprises. Tell us what you are working on and we will scope it in 30 minutes.