Heads up: posts on this site are drafted by Claude and fact-checked by Codex. Both can still get things wrong — read with care and verify anything load-bearing before relying on it.

Why does MCP exist?

Every AI app was reinventing the same plumbing to talk to the same tools. MCP is the standard that turns an M×N integration mess into M+N.

AI & ML intro Apr 29, 2026

Why it exists

By late 2024, every team building an AI assistant had hit the same wall. Their agent could write code, summarize text, reason about a problem — but the moment it needed to do something in the real world (read a file, query a database, search a wiki, open a Jira ticket) someone had to write glue. And the next assistant from the next team had to write that glue all over again, in a slightly different shape, against the same Jira.

This is the classic M×N integration problem. With M AI applications (Claude Desktop, Cursor, your internal copilot, the chatbot in your IDE) and N things they want to talk to (filesystem, Git, Postgres, Slack, Google Drive, your company’s wiki), the naive world needs M × N bespoke integrations. Each one is a small, boring, slightly different piece of plumbing. None of it is the interesting part of building an agent, and all of it has to exist before the agent is useful.

The Model Context Protocol exists to collapse that M × N into M + N. Write one MCP server for Postgres, and every MCP-aware client gets Postgres for free. Write one MCP-aware client, and it inherits the entire ecosystem of servers other people have already shipped. The interesting work — the agent itself — is no longer gated on writing the same five adapters everyone else just wrote.

If that pattern sounds familiar, it should: it’s the same move LSP made for editors and language tooling a decade earlier. Before LSP, every editor had to ship a custom integration for every language. After LSP, any editor that spoke the protocol got every language that spoke it. MCP is LSP’s idea applied to AI assistants and the world of tools they want to reach.

Why it matters now

Agents are only as useful as the surface area they can touch. A model that can reason brilliantly about your codebase but can’t read your codebase is a parlor trick. So the practical question for anyone shipping an AI product in 2026 isn’t “is the model smart enough?” — it’s “what can the agent actually reach, and how painful was it to wire up?”

MCP changes the answer to that second question.

For software engineers specifically, this means the question shifts from “how do I bolt my agent onto Jira?” to “is there an MCP server for Jira, and if not, how do I write a small one?” That’s a much smaller question.

The short answer

MCP = JSON-RPC + a small vocabulary (tools, resources, prompts) + a client/server split

MCP is an open protocol, built on top of JSON-RPC 2.0, that lets an AI application (the client / host) talk to external capabilities (each one exposed by an MCP server) through a tiny shared vocabulary. Servers offer three kinds of things: tools the model can call, resources it can read, and prompts the user can invoke. Any client that speaks MCP can use any server that speaks MCP. That’s the whole idea.
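
Concretely, a tool invocation on the wire is just a JSON-RPC 2.0 request with an MCP method name. A sketch (the tool name and arguments here are hypothetical; the envelope fields are standard JSON-RPC):

```python
import json

# A hypothetical tools/call request as it would appear on the wire.
# "jsonrpc", "id", "method", "params" come from JSON-RPC 2.0;
# the tool name and its arguments are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}

wire = json.dumps(request)
print(wire)
```

Everything else in the protocol is variations on this envelope with different method names.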

How it works

There are three roles in an MCP setup, and it helps to keep them straight: the host is the AI application the user actually runs (Claude Desktop, an IDE); the client is the connection object the host maintains, one per server; and the server is the process exposing capabilities. One host, many clients, each client paired with exactly one server.

When the host starts up, each client connects to its server, performs a short handshake, and asks: what do you offer? The server replies with a list of tools (functions the model can call, with JSON schemas for their arguments), resources (readable things addressable by URI — a file, a row, a doc), and prompts (named templates the user can trigger, like a slash-command).
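
Each tool in that reply pairs a name with a JSON schema for its arguments. A sketch of checking a call against such a schema by hand (the tool descriptor is hypothetical, shaped like a tools/list entry; real clients typically lean on a schema library instead):

```python
# Hypothetical tool descriptor, shaped like one entry in a tools/list result.
tool = {
    "name": "create_issue",
    "description": "File an issue in the tracker",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["title"],
    },
}

def missing_required(tool, arguments):
    """Return the names of required arguments absent from a call."""
    schema = tool["inputSchema"]
    return [k for k in schema.get("required", []) if k not in arguments]

print(missing_required(tool, {"body": "steps to reproduce"}))  # → ['title']
```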

From then on, the loop is what you’d expect:

user: "summarize the latest PR description and file an issue"

host →  asks model what to do, sends along the list of available tools
model → tool_call: github.get_pull_request(repo=…, number=…)
host →  forwards to the GitHub MCP server, gets back PR data
model → tool_call: jira.create_issue(title=…, body=…)
host →  forwards to the Jira MCP server, gets back the new issue URL
model → "Filed JIRA-1421 with a summary of #482."

The model itself never speaks MCP. The host’s harness translates the model’s tool calls into MCP requests, dispatches them to the right server, and feeds the results back into the context. (This is the same harness loop covered in What is an agent harness? — MCP is one tidy way to populate the “tools” part of that loop.)
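
The loop above can be sketched in a few lines. Everything here is made up for illustration (the servers are stubbed as plain dicts of callables where a real host would hold MCP client connections); the dispatch shape is what matters:

```python
# Minimal host-side routing: forward each model tool call to the server
# that owns the tool. Server registry and results are stubs.
servers = {
    "github": {"get_pull_request": lambda repo, number: {"title": "Fix login bug"}},
    "jira": {"create_issue": lambda title, body: {"url": "JIRA-1421"}},
}

def dispatch(server_name, tool_name, arguments):
    """Forward one tool call to the owning server and return its result."""
    return servers[server_name][tool_name](**arguments)

# One turn of the loop: the "model" (stubbed) reads a PR, then files an issue.
pr = dispatch("github", "get_pull_request", {"repo": "acme/app", "number": 482})
issue = dispatch("jira", "create_issue", {"title": pr["title"], "body": "summary"})
print(issue["url"])  # → JIRA-1421
```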

A few mechanical details worth knowing:

Transports. The protocol is transport-agnostic, but two transports dominate in practice: stdio (the host launches the server as a subprocess and talks over its standard input/output — perfect for local tools like a filesystem server) and streamable HTTP for remote servers hosted somewhere else (which can upgrade responses to server-sent event streams, and which replaced the spec's earlier HTTP+SSE transport). The protocol payload is the same either way; only the pipe changes.
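
Over stdio the framing is about as simple as framing gets: each JSON-RPC message is one line of UTF-8 JSON on the pipe. A sketch with no real server involved:

```python
import json

def frame(message):
    """Serialize one JSON-RPC message for the stdio transport: one line of JSON."""
    return (json.dumps(message) + "\n").encode("utf-8")

def unframe(line):
    """Parse one newline-delimited message back into a dict."""
    return json.loads(line.decode("utf-8"))

# Round-trip a ping (a real MCP method) through the framing.
ping = {"jsonrpc": "2.0", "id": 7, "method": "ping"}
assert unframe(frame(ping)) == ping
```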

Capability negotiation. The handshake exchanges what each side supports — does the server stream resource updates? does the client support sampling (letting the server ask the model a sub-question)? — so both sides know what’s safe to use.
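
Negotiation amounts to each side declaring a capabilities object during initialize and then only using features the other side declared. A sketch (the two capability names are real ones from the spec, but the declarations here are invented examples):

```python
# Example capability declarations exchanged during initialize.
client_caps = {"sampling": {}}                  # client lets servers ask the model sub-questions
server_caps = {"tools": {"listChanged": True}}  # server will notify when its tool list changes

def supports(caps, feature):
    """A feature is usable only if the other side declared it."""
    return feature in caps

# The host checks before relying on change notifications:
if supports(server_caps, "tools") and server_caps["tools"].get("listChanged"):
    pass  # safe to react to tool-list-changed notifications
```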

Security is mostly the host’s job. A server can advertise a delete_everything tool; whether the model is allowed to call it, whether the user gets a confirmation prompt, whether the call is logged — all of that lives in the host’s permission layer, not in MCP itself. The protocol is an interoperability standard, not a sandbox. This is worth internalizing before you install a random MCP server you found online.
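
Since the protocol imposes no policy, a host typically wraps every tool call in its own permission check before anything reaches a server. A sketch of such a gate (the policy shape and tool names are invented; MCP defines none of this):

```python
# Hypothetical host-side permission layer. MCP doesn't specify one;
# each host rolls its own.
ALLOWED = {"read_file", "list_directory"}               # auto-approved tools
NEEDS_CONFIRMATION = {"write_file", "delete_everything"}

def gate(tool_name, confirm=lambda name: False):
    """Return True if the call may proceed under host policy."""
    if tool_name in ALLOWED:
        return True
    if tool_name in NEEDS_CONFIRMATION:
        return confirm(tool_name)  # e.g. pop a confirmation dialog for the user
    return False                   # unknown tools are denied by default

print(gate("read_file"))           # → True
print(gate("delete_everything"))   # → False (no confirmation given)
```

Denying unknown tools by default is the conservative choice; an allowlist-first gate means a newly installed server can't act until someone opts in.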

It is not magic — it’s mostly boring. Reading the spec, the protocol is a small set of JSON-RPC methods (initialize, tools/list, tools/call, resources/list, resources/read, prompts/list, prompts/get, plus notifications). The power is not in the protocol’s cleverness; it’s in everyone agreeing to speak the same boring thing.
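
That small method set means a toy server is mostly a dispatch table. A sketch (handlers are empty stubs; -32601 is JSON-RPC's standard "method not found" code):

```python
# Stub handlers keyed by the core MCP method names.
handlers = {
    "initialize": lambda params: {"protocolVersion": params.get("protocolVersion")},
    "tools/list": lambda params: {"tools": []},
    "resources/list": lambda params: {"resources": []},
    "prompts/list": lambda params: {"prompts": []},
}

def handle(request):
    """Dispatch one JSON-RPC request to its handler, or return a JSON-RPC error."""
    method = request["method"]
    if method not in handlers:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"Method not found: {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": handlers[method](request.get("params", {}))}

print(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
```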

Going deeper

A note on what I’m sure of: MCP was introduced and open-sourced by Anthropic in late 2024 and the protocol is built on JSON-RPC 2.0 — those are documented in the spec. The exact set of clients and servers shipping in production changes month to month, so treat any specific roster (“X supports MCP”) as a thing to verify against current docs rather than a snapshot to memorize.