MCP Servers Are How Claude Actually Talks to Everything
- MCP servers expose tools, resources, and prompts to Claude through a tiny JSON-RPC protocol.
- Five MCP servers I run daily: GitHub, a scoped Filesystem, Playwright, Linear, and Postgres.
- An 80-line TypeScript server can wrap any script I already have, like my Shopify publisher.
- MCP still hurts on discovery, config UX, context bloat, OAuth, and the stdio versus HTTP split.
Most people skip MCP servers because the name sounds like something a platform team installs on a Friday. I skipped them for months for exactly that reason. Then I wired up three of them in an afternoon and my Claude Code sessions stopped feeling like I was copy-pasting context into a chat. Model Context Protocol is not infrastructure. It is a plugin system. An MCP server is a small process that exposes three things to Claude, and any MCP-aware client (Claude Code, Claude Desktop, the Agent SDK) can pick it up. This is the practical version: what MCP actually is, the five servers I run every day, an 80-line server I wrote to publish blog posts, and the parts that still hurt.
What an MCP Server Actually Does
An MCP server is a process. It speaks a small JSON-RPC protocol over stdio or HTTP. It exposes three primitives, and that is the whole mental model.
Tools. Callable functions. The client can invoke them with typed arguments and get a result. A tool might be `list_issues`, `query_database`, or `publish_blog_post`. Tools are verbs. When Claude decides to do something in the outside world, it calls a tool.
Resources. Read-only data the client can fetch. Files, database rows, a JSON blob, a rendered page. Resources are nouns. My blog registry is a resource. The current `.env.example` is a resource. Claude can read them without calling a tool every time.
Prompts. Reusable templates the server ships. A `code_review` prompt or a `write_commit_message` prompt the client can load into context. I use this one less often, but it is useful for servers that want to standardize how you talk to them.
That is it. Three primitives, one protocol, one process per server. A client connects to the server and gets back a list of what is on offer. The client then routes the model's tool calls to the right server.
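Under the hood, that discovery step is a plain JSON-RPC exchange. A minimal sketch of the `tools/list` round trip — the method name and result shape come from the MCP spec, but the tool shown is illustrative:

```
// client → server
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// server → client
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "list_issues",
        "description": "List open issues in a tracker",
        "inputSchema": { "type": "object", "properties": { "label": { "type": "string" } } }
      }
    ]
  }
}
```

Everything else (`tools/call`, `resources/list`, `resources/read`) follows the same request-response shape.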
Transport is either stdio (Claude Code launches the server as a child process and talks through pipes) or HTTP (the server runs somewhere, Claude connects over the network). Stdio is easier for local tools. HTTP is for anything you want to share across machines or run in a sandbox.
Config lives in one JSON file. For Claude Desktop on macOS it is `~/Library/Application Support/Claude/claude_desktop_config.json`. For Claude Code it is `~/.claude.json` or a project-level `.mcp.json`. Every server gets an entry with a command, args, and optional env vars.
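The shape of an entry looks like this — the server name and package here are placeholders, not a real install:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["-y", "some-mcp-server"],
      "env": { "API_TOKEN": "..." }
    }
  }
}
```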
Here is the mental shift. I used to build scripts and then tell Claude about them in a CLAUDE.md file. Now I wrap the script in an MCP tool and Claude discovers it. The script is the same. The wrapper is 20 lines. The payoff is that Claude can call it without me pasting the path every session.
The 5 MCP Servers I Actually Use
I have tried maybe 20 of them. Most I installed, used once, and removed. These five earn their slot.
GitHub MCP. The official one from the modelcontextprotocol org. I use it to triage PRs without leaving my terminal. Concrete task: I ask Claude to read the open PRs on raxxo-shop, summarize the review comments on the top one, and draft a reply. It fetches the PR, reads the diff, pulls the comment thread, and writes the response. I used to do this in 4 browser tabs. Now it is one prompt. I still open the browser to merge, because merging through a chat feels wrong to me.
Filesystem MCP (scoped). The default filesystem server gives access to a list of allowed directories. I run it with exactly two paths: `~/CLAUDE/RAXXOSTUDIOS` and `/tmp`. That is on purpose. If Claude tries to read `~/.ssh` it gets a clean error. The default examples show people exposing `$HOME`. Do not do that. Concrete task: "diff the CLAUDE.md files across all 17 RAXXO projects and tell me which rules are inconsistent." It reads all 17 files in parallel and returns the diff in 10 seconds.
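With the reference filesystem server, the scoping is just the argument list: every path after the package name becomes an allowed root. A sketch of my entry, assuming the package name from the official servers repo:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/CLAUDE/RAXXOSTUDIOS",
        "/tmp"
      ]
    }
  }
}
```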
Playwright MCP. Browser control through the Playwright server. I also run my own `agent-browser` CLI which is faster for dev loops, so I only reach for Playwright MCP when I want Claude to drive a real browser without me watching. Concrete task: "log into the Shopify admin (credentials in env), check if the lime button on the homepage is actually rendering at #e3fc02, screenshot it." It opens Chrome, logs in, samples the pixel, and saves the screenshot. For ten-step flows I still prefer agent-browser because it streams results, but Playwright MCP is the right choice when the flow is short.
Linear MCP. I picked Linear over Notion because my issue tracking lives there. Concrete task: "for every open issue tagged `launch`, pull the description, group them by product, and tell me which products are blocked." It runs one query, buckets the results, and I know within a minute what to work on. I have tried the same with Notion MCP and it works, but Notion's API is slower and the data model forces me to translate page-blocks to something useful. Linear's graph is flatter, the MCP server mirrors it, and my queries finish fast.
Postgres MCP. The read-only one. I point it at my analytics database. Claude never sees the password. It sees the connection through the server. Concrete task: "show me blog posts from the last 30 days where affiliate click-through rate is below 2%." One SQL query, no credentials in the conversation, no raw `psql` paste. When I want write access I configure a second server with a scoped role and a different URL. Two servers, two scopes, zero accidents.
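The two-scopes setup is just two config entries pointed at different database roles — a sketch, assuming a Postgres MCP server that takes the connection URL as an argument (the role names are illustrative; the read/write split lives in the database grants, not the server):

```json
{
  "mcpServers": {
    "analytics-ro": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://readonly_user@localhost/analytics"]
    },
    "analytics-rw": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://writer_role@localhost/analytics"]
    }
  }
}
```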
Five servers, five jobs, no overlap. That is the rule I land on after trying more. One server per external system beats one big server that does everything.
Building My Own MCP Server in 80 Lines
The payoff of understanding the three primitives is that writing a server is small. Here is a real one I wrote last week. It exposes one tool that calls my existing Shopify blog publish script (I use Shopify as the backbone for raxxo.shop), and one resource that returns my blog registry as JSON.
```typescript
// mcp-servers/raxxo-publish/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  ListToolsRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { execFile } from "node:child_process";
import { readFile } from "node:fs/promises";
import { promisify } from "node:util";

const run = promisify(execFile);
const REGISTRY = `${process.env.HOME}/notes/index.json`;
const POST_SCRIPT = `${process.env.HOME}/bin/post.sh`;

const server = new Server(
  { name: "raxxo-publish", version: "0.1.0" },
  { capabilities: { tools: {}, resources: {} } }
);

// Advertise the single tool this server offers.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "publish_post",
      description: "Publish a markdown file to a blog via the local CLI",
      inputSchema: {
        type: "object",
        properties: { path: { type: "string" } },
        required: ["path"],
      },
    },
  ],
}));

// Route tool calls to the existing shell script.
server.setRequestHandler(CallToolRequestSchema, async (req) => {
  if (req.params.name !== "publish_post") throw new Error("unknown tool");
  const path = req.params.arguments?.path;
  if (typeof path !== "string") throw new Error("path argument is required");
  const { stdout } = await run(POST_SCRIPT, [path]);
  return { content: [{ type: "text", text: stdout }] };
});

// Advertise the blog registry as a read-only resource.
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "notes://index",
      name: "Notes Index",
      mimeType: "application/json",
    },
  ],
}));

server.setRequestHandler(ReadResourceRequestSchema, async (req) => {
  if (req.params.uri !== "notes://index") throw new Error("unknown resource");
  const text = await readFile(REGISTRY, "utf8");
  return { contents: [{ uri: req.params.uri, mimeType: "application/json", text }] };
});

await server.connect(new StdioServerTransport());
```
That is the server. Under 80 lines with imports, and most of it is schema declarations. The actual logic is four handlers.
To wire it into Claude Code, I add one entry to the config:
```json
{
  "mcpServers": {
    "raxxo-publish": {
      "command": "tsx",
      "args": ["/Users/me/CLAUDE/RAXXOSTUDIOS/mcp-servers/raxxo-publish/index.ts"]
    }
  }
}
```
Restart Claude Code, and the `publish_post` tool shows up alongside everything else. I can now say "publish /tmp/blog-posts/mcp-servers-practical-guide.md" and Claude calls the tool. The blog registry resource is readable without any extra wiring, so Claude knows what has already been published before it writes anything new.
The takeaway is that if I have a working script, the wrapper is thirty minutes of work. If I do not have a working script, the MCP server was never the bottleneck.
Where MCP Still Hurts
I am not going to pretend this is all smooth. Five honest pain points.
Discovery is bad. There is no real marketplace. The best list I have found is the GitHub `awesome-mcp-servers` repo, which is community-maintained and inconsistent. Some servers link to archived repos. Some link to paid services that require 5 minutes of account setup before I can tell if the server is any good. I want a curated, dated, "last working test" list. Nobody has built that yet. Closest thing is the official registry at `modelcontextprotocol.io`, but it is thin.
Config UX is editor-first. Installing a server means editing a JSON file by hand. No one-click install. No version pinning. No rollback. I keep a git-tracked copy of my MCP config so that when a server breaks an update I can revert in one command. This should be built in.
Chatty servers bloat context. Some servers expose 40 tools when I need 3. Every tool description goes into the model's context window. I have watched a fresh session burn 4k tokens on tool schemas before I said a word. The fix is to run the server with a filter flag or fork it, but neither is obvious from the README. Check the tool count before you install.
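A quick way to see the cost before a session burns it: serialize the tool list a server advertises and estimate tokens at roughly four characters each. The ratio is a rule of thumb, not a tokenizer, and the `tools` array below is a stand-in for a real `tools/list` response:

```typescript
// Rough context cost of a server's advertised tool schemas.
// Assumes ~4 characters per token, a common rule of thumb.
type Tool = { name: string; description: string; inputSchema: object };

const approxTokens = (tools: Tool[]): number =>
  Math.ceil(JSON.stringify(tools).length / 4);

// Stand-in for what a chatty server returns from tools/list:
const tools: Tool[] = Array.from({ length: 40 }, (_, i) => ({
  name: `tool_${i}`,
  description: "A typical one-sentence tool description goes here.",
  inputSchema: { type: "object", properties: { arg: { type: "string" } } },
}));

console.log(approxTokens(tools)); // forty mediocre tools cost four figures
```

Run that against any server's schema dump and you know the tax before you pay it every session.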
OAuth is rough. Services that need OAuth (Notion, Google, Slack) have a clumsy dance where the server asks you to open a browser, copy a token, paste it into a file. Refresh tokens are inconsistent. I have had three servers silently expire and fail for a day before I noticed. Until there is a standard auth flow, OAuth-requiring servers are the ones most likely to eat my afternoon.
stdio versus HTTP causes footguns. A server written for stdio cannot be shared across machines without a wrapper. A server written for HTTP cannot easily access local files. I have written the same server twice with different transports because I misjudged which one I would need six months later. Pick the transport that matches where the server runs (local tools stdio, remote services HTTP) and do not try to be clever.
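In the config file, the split shows up as two different entry shapes — a sketch, assuming Claude Code's current `.mcp.json` syntax for remote servers (the URL is illustrative):

```json
{
  "mcpServers": {
    "local-tool": {
      "command": "tsx",
      "args": ["./index.ts"]
    },
    "shared-service": {
      "type": "http",
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```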
None of this is fatal. MCP is a year old. The protocol is stable, the clients are growing, and the servers keep improving. The friction is real and fixable. I bring it up because most articles talk about MCP like it is done. It is not. It is useful, and it is undercooked in specific places.
Bottom Line
MCP servers are the plugin system Claude actually has. Three primitives (tools, resources, prompts), one protocol, one process per server. If you know how to write a 50-line Node script, you can ship an MCP server by dinner.
The wins are real. I publish blog posts without pasting paths. I read issues without switching tabs. I query my database without handing over credentials. Each server replaces a tiny tax I used to pay every session, and the taxes compound.
The friction is also real. Discovery, config, context bloat, OAuth, transport choice. Budget an afternoon for your first server, and another afternoon three months later when something breaks and you have to figure out why. Start with the five I listed, write your first wrapper around a script you already trust, and do not install a sixth server until you have used the first five for a week.
This article contains affiliate links. If you sign up through them, I may earn a small commission at no extra cost to you. (Ad)