Engineering

What Are MCP Servers? A Developer Guide to the Model Context Protocol

Julia Mase · 13 min read

When Anthropic released the Model Context Protocol in late 2024, most people missed it. It looked like a niche spec for letting Claude read your filesystem. Eighteen months later, there are more than 10,000 MCP servers in the wild, every major AI client supports the protocol, and Anthropic reports that MCP installs crossed 97 million in March 2026.

This post is the guide I wish I had when I started poking at MCP seriously in January. I am going to explain what MCP servers actually are, why the protocol caught on so fast, the mental model I use to decide when to use one versus a raw API, and the short list of servers I actually have installed on my machine. If you are here to copy and paste a config, skip to the Best MCP servers to install today section.

#The USB-C analogy is actually correct

Anthropic's official docs compare MCP to a USB-C port for AI applications. I rolled my eyes at the analogy the first time I read it. Then I tried to integrate Claude Code with Linear, Google Drive, GitHub, and a custom internal API all in the same afternoon, and I realized the analogy is exactly right.

Before MCP, every AI tool integration was its own plug. You wanted Claude to read a Google Doc? You needed a custom OAuth flow, a wrapper around the Docs API, a way to surface the data in a format the model could consume, and some logic to decide when to call it. You wanted Cursor to read your Notion workspace? Different wrapper, different API pattern, different integration. Every app built its own.

MCP took the integration layer and turned it into one standardized interface. An MCP server exposes data, tools, and prompts through a protocol. An MCP client (like Claude Code, Cursor, or VS Code) speaks that protocol and can consume anything on the other end. Build once, integrate everywhere. That is the actual promise, and for once in tech marketing the promise matches what happened.

#What an MCP server actually does

An MCP server exposes three things to an AI client.

Tools. Functions the AI can call. For a GitHub MCP server, tools might include create_pull_request, list_issues, get_file_contents. For a Stripe MCP server, tools include list_customers, create_refund, search_payments.

Resources. Data the AI can read. For a filesystem MCP server, resources are files and directories. For a Notion MCP server, resources are pages and databases.

Prompts. Pre-written prompt templates for common workflows that the client can surface as slash commands or shortcuts.

The client asks the server "what tools, resources, and prompts do you have?" The server responds with a schema. The client then decides, based on the user's request, which tools to call. When Claude Code wants to create a pull request, it calls the GitHub MCP server's create_pull_request tool, the server actually hits GitHub's API, and the result comes back to the model as part of the conversation.
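
On the wire, that exchange is a pair of JSON-RPC messages. The method name and the shape of the result follow the MCP spec; the id, the arguments, and the response text here are illustrative placeholders.

The request from the client:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "create_pull_request",
    "arguments": { "title": "Fix login redirect", "head": "fix-login", "base": "main" }
  }
}

The response from the server:

{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "Pull request created" }]
  }
}

Discovery works the same way: the client sends a tools/list request and gets back an array of tool definitions, each with a name, a description, and an input schema.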

This sounds simple because it is simple. The entire reason MCP blew up is that it is deliberately boring. It does not try to replace tool use, it does not try to be a workflow engine, it just standardizes how AI clients discover and call external capabilities.

#How the protocol works in practice

MCP servers talk to clients over two transports.

stdio is the default for local servers. The client spawns the MCP server as a subprocess and communicates over stdin and stdout. This is what you get when you install the official filesystem server or the GitHub server on your machine. Zero networking, zero auth, zero cloud. The server runs on your laptop and only your laptop can see it.

Streamable HTTP is for remote servers. The MCP server runs as an HTTP server, the client connects over the network, and the protocol flows over HTTP with streamed responses. This is how hosted MCP servers work, and it is how you would expose an MCP server behind an enterprise auth boundary.
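
To make the two transports concrete, here is a sketch of how they show up in a Claude Code config. This assumes Claude Code's mcpServers format, where a command entry spawns a local stdio subprocess and a type of "http" points the client at a remote URL; the server names, script, and URL are placeholders.

.mcp.json
{
  "mcpServers": {
    "local-example": {
      "command": "python",
      "args": ["my_server.py"]
    },
    "remote-example": {
      "type": "http",
      "url": "https://mcp.example.com/mcp"
    }
  }
}

Everything above the transport is identical in both cases; only the wire changes.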

The protocol itself is JSON-RPC under the hood. If you have ever implemented a Language Server Protocol extension, MCP will feel immediately familiar. Both protocols come from the same design tradition, and LSP veterans will notice the resemblance in how the handshake, capability discovery, and request-response patterns work.

#Best MCP servers to install today

After six months of running MCP in production, these are the servers I actually keep installed. The list excludes experimental hobby servers and focuses on things I reach for weekly.

[Chart: my weekly MCP usage. Personal frequency, measured across four weeks of Claude Code sessions; your mileage will vary with role and stack.]

Filesystem is the first MCP server you should install. It is the official reference server from Anthropic. It gives your AI client secure, permissioned file operations with configurable access controls. Almost every other workflow builds on this being present.
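
If you want to try it, the install is a single config entry. This sketch uses the official @modelcontextprotocol/server-filesystem package launched via npx; the directory arguments at the end are the access control, since the server only operates inside the paths you list.

.mcp.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/project"]
    }
  }
}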

GitHub is the second. It lets Claude Code or Cursor read your repo metadata, create and list issues, open PRs, comment on reviews, and work with branches. It is the single biggest productivity unlock for code-related work, because suddenly your AI can see your real development state instead of just the file you pasted.

Context7 is underrated. It fetches live documentation for any library you ask about. Instead of the model hallucinating based on whatever it saw in training data, Context7 fetches the current docs and gives them to the model as context. I use this every time I touch a library I have not used in six months.

SQLite is the one I did not expect to love. It gives the model a local database it can experiment with, prototype schemas against, and use for small internal tools. The key insight is that it is isolated from production, so letting the AI "drop table and see what happens" is actually safe. It turns the database from a thing you worry about into a thing you play with.
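
A minimal sketch of the setup, assuming the reference mcp-server-sqlite package launched with uvx; the --db-path flag points at a throwaway database file, which is exactly the point.

.mcp.json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/path/to/scratch.db"]
    }
  }
}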

Slack, Linear, and Notion are the workflow servers. They let your AI client read messages, update tickets, and sync docs. I reach for these less often than the code-focused servers, but when I do, they save me a lot of copy-pasting.

Stripe is the one that made me nervous at first, because giving an AI access to your payments API sounds like a very bad idea. The trick is that Stripe's official MCP server has a well-defined scope and you control exactly which API keys it can use. In practice I only wire it up when I am debugging customer support tickets, and it makes that workflow much faster.

#Building your own MCP server

You can build a working MCP server in about an hour if you pick the right language. Python and TypeScript are the most common. Here is a minimal Python server using the official SDK's low-level API.

install.sh
uv init my-mcp-server
cd my-mcp-server
uv add mcp
server.py
from mcp.server import Server
from mcp.types import Tool, TextContent
 
app = Server("my-mcp-server")
 
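# Called during discovery: report the tools this server exposes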
@app.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="greet",
            description="Greet someone by name",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string"}
                },
                "required": ["name"]
            }
        )
    ]
 
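# Called when the model invokes a tool; dispatch on the tool name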
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "greet":
        return [TextContent(type="text", text=f"Hello, {arguments['name']}!")]
    raise ValueError(f"Unknown tool: {name}")
 
if __name__ == "__main__":
    import asyncio
    from mcp.server.stdio import stdio_server

    async def main():
        # stdio_server yields the read/write streams the client is attached to
        async with stdio_server() as (read_stream, write_stream):
            await app.run(read_stream, write_stream, app.create_initialization_options())

    asyncio.run(main())

Register the server in your Claude Code config:

.mcp.json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/my-mcp-server", "python", "server.py"]
    }
  }
}

Restart Claude Code, and the greet tool is now available. That is the entire flow. From here, you replace greet with whatever function you want the model to be able to call, and the same pattern scales up to real integrations.

#When to use MCP vs a raw API

This is the question I get most often from developers who are new to the protocol. The short answer is that MCP is for when the AI model needs to be able to call the integration dynamically. A raw API call is for when your code needs to call the integration on a predictable schedule.

If you are writing an agent that might need to create a Linear ticket, look up a PR, or query a database based on what the user asks next, MCP is the right layer. The model discovers what is available at session start and decides when to call which tool.

If you are writing a nightly cron job that syncs data from one system to another on a fixed schedule, you do not need MCP. You need a script that calls the API directly. Adding MCP to a non-agentic workflow is pure overhead.

There is a full post on this exact decision in MCP vs API. The TLDR is: MCP exists so models can decide when to call integrations, not so humans can call them faster.

#Security and permissions

One thing the official docs do not emphasize enough is that MCP servers run with whatever permissions you give them. A filesystem MCP server pointed at your home directory can read your home directory. A GitHub MCP server with repo-write scopes can push to any repo you can push to.

My rules of thumb:

  1. Scope aggressively. Give the filesystem server a specific project directory, not your home folder. Give the GitHub server a specific repo scope, not org admin. (A sketch follows this list.)
  2. Start with local stdio servers. Do not expose MCP servers over HTTP until you know exactly what auth model you are using.
  3. Audit the tools the model can see. Run /mcp in Claude Code to see which servers are active and what tools they expose. If something looks dangerous, disable it.
  4. Trust official servers first. The Anthropic-maintained reference servers and the first-party ones from GitHub, Stripe, and Google are audited. Random "my first MCP" repos are not.
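
Here is roughly what rule 1 looks like in practice for GitHub, assuming the reference @modelcontextprotocol/server-github package, which reads its token from the GITHUB_PERSONAL_ACCESS_TOKEN environment variable. The scoping itself happens on GitHub's side: you issue a fine-grained token limited to a single repository, and the placeholder below stands in for it.

.mcp.json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-fine-grained-pat-for-one-repo"
      }
    }
  }
}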

#The ecosystem is moving fast

As of April 2026, the MCP ecosystem is expanding weekly. PulseMCP tracks more than 11,000 servers across every category. Anthropic's own stats put MCP installs at 97 million as of March 2026, up from near zero a year ago. Every major AI client (Claude, ChatGPT, VS Code, Cursor, JetBrains) now ships with MCP support.

The implication for developers is that MCP is now the default way to connect AI tools to external systems. If you are integrating anything new into your AI workflow, you should check whether an MCP server exists first before building a custom integration. Nine times out of ten, someone has already built it and it works better than what you would put together in an afternoon.

Practicing for AI-tooling interviews?

MCP is starting to show up in engineering interviews. Our AI interview prep covers agentic coding workflows including MCP integration patterns.


#Frequently asked questions

What is an MCP server?
An MCP server is a program that exposes tools, resources, and prompts to AI applications through the Model Context Protocol. It lets AI clients like Claude Code, Cursor, or ChatGPT call functions, read data, and use pre-built prompt templates from external systems. Think of it as a standardized adapter between an AI model and an external service.
How many MCP servers are there?
As of April 2026, there are more than 10,000 MCP servers in the public ecosystem according to directories like PulseMCP and MCP.so. Anthropic reports that MCP installs crossed 97 million in March 2026, making it one of the fastest-growing developer protocols of the decade.
Is MCP only for Claude?
No. MCP is an open protocol supported by Claude, ChatGPT, VS Code, Cursor, JetBrains IDEs, MCPJam, and many other clients. It was created by Anthropic in late 2024 and released as an open standard so any AI application could adopt it. Build once, integrate everywhere was the explicit design goal.
What is the difference between an MCP server and an API?
An API is a direct, predictable interface that you or your code calls by name. An MCP server is an adapter that lets an AI model discover available capabilities at session start and decide dynamically which tools to call based on what the user asks. MCP is for agentic workflows; APIs are for deterministic ones. See our full post on MCP vs API for more detail.
How do I build an MCP server?
The shortest path is the official Python or TypeScript SDK. You define tools with an inputSchema, implement a call_tool handler, and run the server over stdio transport. A minimal server takes about 50 lines of code and can be registered in Claude Code by adding it to the mcpServers section of your config file. More complex servers add resources and prompts.
Is it safe to run MCP servers?
It depends on what the server does and what permissions you give it. Local stdio servers from trusted sources are generally safe. HTTP servers need proper auth. The biggest risk is giving a server broader filesystem or API access than it needs. Scope aggressively, trust official servers first, and use the /mcp command in Claude Code to audit which servers are active.
