Building MCP Servers for Custom Tools
MCP servers extend Claude Code with custom tools and resources—build them once, reuse across any MCP-compatible AI assistant.
Claude Code is powerful out of the box, but it can't read your internal databases, query proprietary APIs, or access custom tooling. MCP (Model Context Protocol) solves this by providing a standardized way to expose tools, resources, and prompts to AI coding assistants. Build an MCP server once, and it works with Claude Code, OpenCode, Cursor, and any other MCP-compatible client. The protocol is essentially a plugin system for AI tools.
What MCP Provides
MCP servers expose three types of capabilities to AI assistants:
- Tools: Executable functions the AI can call (similar to function calling)
- Resources: Read-only data the AI can reference (files, database records, API responses)
- Prompts: Pre-configured system instructions for specific tasks
Think of it this way: Claude Code has a built-in filesystem, terminal, and HTTP client. MCP lets you add new capabilities to this toolbox—custom database connectors, internal API wrappers, proprietary tools—without waiting for the Claude Code team to implement them.
MCP Architecture
MCP uses a client-server model over stdio (standard input/output) for local development. The client (Claude Code) sends JSON messages to the server process; the server processes them and responds. This design allows:
- Language agnostic servers: Build in Python, TypeScript, Go, Rust—any language that can handle stdio
- Isolation: Each MCP server runs as a separate process with its own dependencies
- Low overhead: No HTTP server needed, just read/write to stdin/stdout
# Message flow (simplified)
Client → Server: {"jsonrpc": "2.0", "method": "tools/list", "id": 1}
Server → Client: {"jsonrpc": "2.0", "result": {"tools": [...]}, "id": 1}
Client → Server: {"jsonrpc": "2.0", "method": "tools/call", "params": {...}, "id": 2}
Server → Client: {"jsonrpc": "2.0", "result": {...}, "id": 2}

The protocol is JSON-RPC 2.0 over stdio—simple enough to implement manually, but SDKs handle the boilerplate for you.
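That request/response flow can be sketched as a tiny dispatcher. This is a hand-rolled illustration of the JSON-RPC layer, not the SDK's implementation: the method names come from the MCP spec, while the empty tool list is a placeholder, and a real server would also handle initialize and notifications.

```typescript
// Hand-rolled JSON-RPC dispatcher, for illustration only; the official
// SDK replaces this and also handles initialize, notifications, and batching.
type JsonRpcRequest = { jsonrpc: "2.0"; method: string; params?: unknown; id: number };

function dispatch(req: JsonRpcRequest): object {
  if (req.method === "tools/list") {
    // A real server would return its registered tools here
    return { jsonrpc: "2.0", result: { tools: [] }, id: req.id };
  }
  // JSON-RPC 2.0 reserves -32601 for "method not found"
  return {
    jsonrpc: "2.0",
    error: { code: -32601, message: `Method not found: ${req.method}` },
    id: req.id,
  };
}
```

In production the dispatcher would read newline-delimited JSON from stdin and write responses to stdout; the SDK's StdioServerTransport does that framing for you.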
Building Your First MCP Server
Start with the official SDK for your language. For TypeScript/JavaScript:
npm install @modelcontextprotocol/sdk

Create a minimal MCP server that exposes a simple tool:
// src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
// Create server instance
const server = new Server(
{
name: "example-server",
version: "0.1.0",
},
{
capabilities: {
tools: {},
},
}
);
// Register tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: "get_weather",
description: "Get current weather for a location",
inputSchema: {
type: "object",
properties: {
location: {
type: "string",
description: "City name, e.g., 'San Francisco, CA'",
},
},
required: ["location"],
},
},
],
};
});
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
if (name === "get_weather") {
// Call your API here
const weather = await fetchWeather(args.location);
return {
content: [
{
type: "text",
text: JSON.stringify(weather, null, 2),
},
],
};
}
throw new Error(`Unknown tool: ${name}`);
});
// Start server
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
}
main().catch(console.error);

Compile and run:

npx tsc
node build/index.js

Claude Code will automatically discover and load this server if it's configured in your MCP settings.
Exposing Resources
Resources are read-only data the AI can reference. Use them for:
- Database records (customers, orders, products)
- API responses (cached data, analytics results)
- Files (documentation, specifications, configs)
- Computed data (reports, metrics, aggregations)
// Register resource handlers (also import ListResourcesRequestSchema and
// ReadResourceRequestSchema from "@modelcontextprotocol/sdk/types.js",
// and declare resources: {} in the server's capabilities)
server.setRequestHandler(ListResourcesRequestSchema, async () => {
return {
resources: [
{
uri: "db://customers",
name: "Customer Database",
description: "All customers in the system",
mimeType: "application/json",
},
],
};
});
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const { uri } = request.params;
if (uri === "db://customers") {
const customers = await db.customers.findMany();
return {
contents: [
{
uri,
mimeType: "application/json",
text: JSON.stringify(customers, null, 2),
},
],
};
}
throw new Error(`Unknown resource: ${uri}`);
});

Claude Code references resources using @ syntax:

@db://customers

The AI automatically requests resource data when it needs it, keeping context usage low compared to dumping all data into the prompt.
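As the resource list grows, a lookup table keeps the ReadResource handler flat instead of a growing chain of if statements. A sketch, with illustrative db:// URIs and stubbed data standing in for a real database:

```typescript
// Map each resource URI to a reader function (URIs and data here are
// illustrative stubs, not a real database)
type ResourceReader = () => Promise<string>;

const resourceReaders: Record<string, ResourceReader> = {
  "db://customers": async () => JSON.stringify([{ id: 1, name: "Ada Lovelace" }]),
  "db://orders": async () => JSON.stringify([]),
};

async function readResource(uri: string) {
  const reader = resourceReaders[uri];
  if (!reader) throw new Error(`Unknown resource: ${uri}`);
  // Same contents shape the ReadResource handler returns
  return {
    contents: [{ uri, mimeType: "application/json", text: await reader() }],
  };
}
```

The ReadResource handler then reduces to a one-line delegation to readResource(request.params.uri).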
Configuration and Discovery
Claude Code discovers MCP servers through configuration files. The mcpServers format below is shared with Claude Desktop; Claude Code can also register servers with the claude mcp add command or a project-level .mcp.json file using the same shape:

// claude_desktop_config.json
{
"mcpServers": {
"weather": {
"command": "node",
"args": ["/path/to/weather-server/build/index.js"]
},
"github": {
"command": "python",
"args": ["/path/to/github-mcp/main.py"]
}
}
}

Each server gets a unique name (the key in mcpServers) and a command to start it. Claude Code launches each server process on startup and communicates via stdio.
Tip: Use absolute paths or paths relative to your home directory. Relative paths can cause issues when Claude Code runs from different working directories.
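Environment variables can be passed per server through the env key, which keeps secrets such as the INTERNAL_API_KEY used in the next section out of the command line. The paths and variable values here are illustrative:

```json
{
  "mcpServers": {
    "internal-api": {
      "command": "node",
      "args": ["/path/to/internal-api-server/build/index.js"],
      "env": {
        "INTERNAL_API_URL": "https://api.internal.example.com",
        "INTERNAL_API_KEY": "your-key-here"
      }
    }
  }
}
```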
Real-World Example: Internal API Connector
Build an MCP server that connects to your internal API and exposes it as tools. This lets Claude Code query proprietary data without API keys ever appearing in prompts—the server handles authentication internally:
// Internal API MCP server
const API_BASE_URL = process.env.INTERNAL_API_URL || "http://localhost:3000/api";
async function callInternalAPI(endpoint: string, method: string = "GET", body?: any) {
const response = await fetch(`${API_BASE_URL}${endpoint}`, {
method,
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${process.env.INTERNAL_API_KEY}`,
},
body: body ? JSON.stringify(body) : undefined,
});
if (!response.ok) {
throw new Error(`API error: ${response.status}`);
}
return response.json();
}
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: "get_user",
description: "Get user by ID",
inputSchema: {
type: "object",
properties: {
userId: { type: "string" },
},
required: ["userId"],
},
},
{
name: "create_order",
description: "Create a new order",
inputSchema: {
type: "object",
properties: {
userId: { type: "string" },
items: {
type: "array",
items: { type: "object" },
},
},
required: ["userId", "items"],
},
},
],
};
});
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
switch (name) {
case "get_user":
return {
content: [
{
type: "text",
text: JSON.stringify(await callInternalAPI(`/users/${args.userId}`)),
},
],
};
case "create_order":
return {
content: [
{
type: "text",
text: JSON.stringify(await callInternalAPI("/orders", "POST", args)),
},
],
};
default:
throw new Error(`Unknown tool: ${name}`);
}
});

Now Claude Code can interact with your internal API seamlessly:

Claude: "Get the user with ID abc123 and create an order for them with 2 items"
[Automatically calls get_user and create_order tools]

Error Handling and Validation
MCP servers should handle errors gracefully and return structured error responses:
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
try {
// Validate input
if (!args.userId || typeof args.userId !== 'string') {
return {
content: [{
type: "text",
text: JSON.stringify({
error: "Invalid input",
details: "userId is required and must be a string"
})
}],
isError: true
};
}
// Call API
const result = await callInternalAPI(`/users/${args.userId}`);
return {
content: [{
type: "text",
text: JSON.stringify(result)
}]
};
} catch (error) {
// Return error details
return {
content: [{
type: "text",
text: JSON.stringify({
error: error instanceof Error ? error.message : "Unknown error"
})
}],
isError: true
};
}
});

The isError: true flag tells Claude Code that the tool call failed, allowing the AI to retry or adjust its approach.
Best Practices
- Validate inputs: Reject invalid requests early with clear error messages
- Use resources for read-only data: Tools for mutations, resources for queries
- Handle authentication internally: Don't require API keys in prompts
- Cache expensive operations: Cache API calls, database queries, or external service requests
- Document tools clearly: Descriptions should explain what the tool does, not how it works
- Test locally first: Use stdio testing tools to verify server behavior before connecting to Claude Code
- Limit context usage: Return only necessary data, filter and transform before sending to the AI
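The caching advice above can be as simple as an in-memory map with expiry. A minimal sketch (the TTL value and key scheme are up to you; a long-lived server with many keys would also want eviction):

```typescript
// Minimal in-memory TTL cache; entries expire after ttlMs milliseconds
function createTtlCache<T>(ttlMs: number) {
  const store = new Map<string, { value: T; expires: number }>();
  return {
    get(key: string): T | undefined {
      const hit = store.get(key);
      if (!hit || hit.expires < Date.now()) {
        store.delete(key); // drop stale entries lazily on read
        return undefined;
      }
      return hit.value;
    },
    set(key: string, value: T): void {
      store.set(key, { value, expires: Date.now() + ttlMs });
    },
  };
}
```

Wrapping callInternalAPI's GET paths with cache.get/cache.set lets repeated tool calls within the TTL skip the network entirely.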
Advanced Patterns
Long-running operations: MCP tool results are returned as a whole rather than streamed token by token, so for large outputs, accumulate results server-side before responding (the protocol also defines progress notifications for reporting status mid-call):

// Collect incremental results, then return them in a single response
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const chunks: string[] = [];
  for await (const chunk of processLargeDataset()) {
    chunks.push(JSON.stringify(chunk));
  }
  return { content: [{ type: "text", text: chunks.join("\n") }] };
});

Prompts for consistent behavior: Pre-configure system instructions for specific tasks:
// Also import ListPromptsRequestSchema and declare prompts: {} in capabilities
server.setRequestHandler(ListPromptsRequestSchema, async () => {
return {
prompts: [
{
name: "code-review",
description: "Review code for security issues and bugs",
arguments: [
{
name: "code",
description: "Code to review",
required: true,
},
],
},
],
};
});

Prompts let users invoke consistent workflows with a single command.
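Listing a prompt is half the story; the companion prompts/get handler returns the actual messages. Its result shape can be built as a plain function, as sketched below (the wording of the instruction text is illustrative; the messages structure follows the MCP spec):

```typescript
// Builds the prompts/get result for the code-review prompt; a
// GetPromptRequestSchema handler would return this object
function buildCodeReviewPrompt(code: string) {
  return {
    description: "Review code for security issues and bugs",
    messages: [
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: `Review the following code for security issues and bugs:\n\n${code}`,
        },
      },
    ],
  };
}
```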
When MCP Makes Sense
Build an MCP server when you need to:
- Expose internal systems to AI assistants without API keys
- Create reusable tooling that works across multiple AI platforms
- Integrate proprietary databases or APIs
- Add domain-specific capabilities (e.g., medical coding, legal analysis)
- Automate repetitive workflows in AI interactions
Don't build an MCP server for:
- Simple tasks that fit in a prompt (e.g., formatting JSON)
- One-off scripts you'll never reuse
- Operations that require human judgment or approval
- Tasks that existing tools already handle well (filesystem, HTTP, Git)
MCP is the standard for extending AI coding assistants. Build once, reuse everywhere. The protocol is simple, the SDKs handle boilerplate, and the integration with Claude Code is seamless. Start small—expose one tool, test it, then expand.