MCP Server Deployment: Docker, Networking & Security Guide
Last updated: March 2026
Key Takeaways:
- MCP (Model Context Protocol) is a networking protocol, not an AI framework. It uses JSON-RPC 2.0 over standard transports like HTTP, Server-Sent Events, and stdio. If you already understand how REST APIs, WebSockets, or reverse proxies work, you already understand the fundamentals of MCP.
- Most MCP tutorials focus on writing server code in Python or TypeScript but completely ignore deployment, security, and infrastructure. This guide covers the networking layer underneath: Docker deployment, port management, TLS termination, reverse proxy configuration, authentication, and firewall rules.
- MCP is still in early adoption. The current SERP is dominated by SDK announcements and "hello world" demos. Practical deployment guides with proper security hardening are almost nonexistent, which means the people deploying MCP servers right now are mostly figuring it out alone.
What MCP Actually Is (In Networking Terms)
Model Context Protocol is an open standard created by Anthropic that defines how AI applications connect to external tools and data sources. If you strip away the AI terminology, what you are looking at is a client-server protocol built on familiar networking primitives.
An MCP client is any AI application that needs to use external tools — Claude Desktop, Cursor, OpenClaw, or a custom application. An MCP server is a process that exposes a set of tools, resources, and prompts through a standardized interface. The client discovers what the server offers, then calls those tools as needed.
The protocol itself is JSON-RPC 2.0 — the same structured request-response format used in Ethereum nodes, Language Server Protocol (the protocol behind VS Code's IntelliSense), and dozens of other systems. If you have ever debugged an LSP connection between an editor and a language server, you have already done MCP troubleshooting in spirit.
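To make the wire format concrete, here is a sketch of a JSON-RPC 2.0 exchange in the shape MCP uses for tool invocation. The "tools/call" method is MCP's convention; the tool name and arguments below are hypothetical examples, not from any real server.

```python
import json

# A JSON-RPC 2.0 request asking an MCP server to invoke a tool.
# "tools/call" is the MCP method name; "read_file" and its arguments
# are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/tmp/notes.txt"},
    },
}

# A matching success response: same "id", a "result" payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "file contents here"}]},
}

wire = json.dumps(request)  # this string is what actually crosses the transport
```

Whether this payload travels over stdio, SSE, or Streamable HTTP, the JSON is identical. Only the transport underneath changes.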
MCP supports three transport types:
stdio — The client launches the server as a child process and communicates over standard input/output. No network ports are opened. This is the simplest and most secure transport because the server never touches the network. Most local MCP servers use this.
HTTP with Server-Sent Events (SSE) — The client connects to the server over HTTP. Requests go from client to server as HTTP POST. Responses stream back via SSE (a persistent one-way HTTP connection from server to client). This is how remote MCP servers work — and this is where networking knowledge becomes critical, because you are now managing HTTP connections, ports, authentication, and potentially TLS.
Streamable HTTP — A newer transport that uses standard HTTP POST for both directions, with optional SSE for streaming. Simpler than the SSE transport for request-response patterns. Still requires the same networking infrastructure as any HTTP service.
The bottom line: if you run an MCP server that uses HTTP transport, you are running a web service. Everything you know about deploying, securing, and troubleshooting web services applies directly.
Why Networking Knowledge Matters More Than AI Knowledge Here
The current wave of MCP tutorials teaches you how to write server code — define tools in Python, handle requests, return responses. That part is straightforward. Where people are hitting walls is the deployment layer underneath.
Developers are posting about MCP connection pooling bottlenecks, fighting corporate firewalls with localtunnel workarounds, debugging transport timeouts, and trying to figure out why their Docker-deployed server is unreachable from Claude Desktop. These are networking problems, not AI problems. The AI model does not care about your port bindings or TLS certificates. It cares about receiving valid JSON-RPC responses within a timeout window.
If you already manage home network infrastructure — Docker containers, reverse proxies, firewall rules, DNS resolution — you have every skill needed to deploy MCP servers reliably. This guide connects those existing skills to the MCP-specific details you need to know.
What You Will Need
Hardware: An MCP server is lightweight. A Raspberry Pi 5 can run several MCP servers simultaneously without strain. If you are already running a home server or mini-PC for other services (Pi-hole, Home Assistant, Ollama), your MCP servers can share that hardware. [Raspberry Pi 5 Starter Kit on Amazon — affiliate link] [Mini-PC 32GB RAM on Amazon — affiliate link]
By shopping with our link, Modemguides may earn a small commission at no cost to you on qualifying items.
Software:
- Docker and Docker Compose
- A working AI application that supports MCP (Claude Desktop, Cursor, OpenClaw, or Claude Code)
- Python 3.10+ or Node.js 18+ (depending on which SDK you use)
- A reverse proxy if you plan to expose servers beyond localhost (Caddy recommended for automatic TLS)
Network isolation (recommended): If you are running MCP servers that handle sensitive data or connect to external services (email, databases, file systems), place them on an isolated network segment. A Firewalla or similar appliance makes this straightforward. [Firewalla on Amazon — affiliate link]
MCP Transport Types: A Networking Comparison
Understanding which transport to use is the first decision that most tutorials skip. Here is a comparison in networking terms you already know.
stdio transport is like a Unix pipe. The client spawns the server process and talks to it over stdin/stdout. No sockets, no ports, no network stack. It is the equivalent of piping one command into another on the command line. Use this for local tools that do not need network access. It is the default for Claude Desktop and Cursor integrations. Security: excellent — no attack surface beyond the process boundary.
SSE transport is like a REST API with a persistent event stream. Client sends HTTP POST requests to the server. Server sends responses and notifications back through an SSE connection (essentially a long-lived HTTP response that streams events). Use this for remote servers, shared servers, or servers that need to push notifications to the client. Security: same as any HTTP service — you need authentication, TLS, and proper port management.
Streamable HTTP transport is like a standard HTTP API with optional streaming. Simpler than SSE for most use cases. The client sends a POST request, the server responds — either as a single JSON response or as an SSE stream if the response is large. Use this for new deployments where you want simplicity. Security: same as SSE transport.
The rule of thumb: use stdio for local tools, use Streamable HTTP or SSE for anything that needs to be accessed over a network.
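The stdio pattern is easy to picture as code. The following is a toy sketch, not the real SDK: a loop that reads one JSON-RPC message per line from stdin and writes a reply to stdout. Real servers use the official Python or TypeScript SDKs, which handle framing, capability negotiation, and the full method set.

```python
import json
import sys

def handle_message(line: str) -> str:
    """Toy dispatcher: answer a 'ping' request, reject everything else."""
    msg = json.loads(line)
    if msg.get("method") == "ping":
        reply = {"jsonrpc": "2.0", "id": msg["id"], "result": {}}
    else:
        reply = {
            "jsonrpc": "2.0",
            "id": msg.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        }
    return json.dumps(reply)

if __name__ == "__main__":
    # The client spawns this process and pipes messages over stdin/stdout:
    # no sockets, no ports, no network stack involved.
    for line in sys.stdin:
        if line.strip():
            print(handle_message(line), flush=True)
```

Note the `flush=True`: buffered stdout is a classic cause of a stdio client that appears to hang while the server thinks it has already answered.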
Step 1: Deploy an MCP Server with Docker
Most MCP servers in the wild are deployed in Docker containers. This is the right approach — containers provide process isolation, reproducible environments, and easy resource management. If you run any other Docker services on your home server, MCP servers fit into the same workflow.
1. Choose or build your MCP server. For this guide, we will use a generic example. The deployment principles are the same regardless of what the server does (file access, database queries, API integration, etc.).
2. Create a Dockerfile for your MCP server if one is not provided. A minimal Python MCP server Dockerfile looks like:
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 3000
CMD ["python", "server.py"]
3. Create a Docker Compose configuration with security hardening:
version: "3.8"
services:
mcp-server:
build: .
ports:
- "127.0.0.1:3000:3000"
environment:
- MCP_AUTH_TOKEN=your-random-token-here
restart: unless-stopped
security_opt:
- no-new-privileges:true
read_only: true
tmpfs:
- /tmp
deploy:
resources:
limits:
memory: 512M
cpus: "1.0"
Critical details: the port binding uses 127.0.0.1 so the server is only reachable from localhost. The security options prevent privilege escalation and make the filesystem read-only. Resource limits prevent a misbehaving server from consuming all system resources.
4. Start the server:
docker compose up -d
5. Verify the port binding:
ss -tlnp | grep 3000
Confirm the listener shows 127.0.0.1:3000, not 0.0.0.0:3000. A 0.0.0.0 binding would expose the server to every device on your network.
Step 2: Configure Your MCP Client
With your server running, you need to tell your AI application how to connect to it. Each client has a slightly different configuration format, but the networking concepts are identical.
Claude Desktop
Edit the Claude Desktop configuration file (typically at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS or %APPDATA%\Claude\claude_desktop_config.json on Windows). Add your server to the mcpServers section:
{
"mcpServers": {
"my-server": {
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer your-random-token-here"
}
}
}
}
For stdio-based servers (no Docker needed), the configuration points to a command instead of a URL:
{
"mcpServers": {
"my-local-tool": {
"command": "python",
"args": ["/path/to/server.py"]
}
}
}
OpenClaw
OpenClaw connects to MCP servers through its skills system. Install the relevant MCP skill and point it at your server's localhost URL. See our OpenClaw + Home Assistant guide for the specific configuration steps with the ha-mcp skill.
Step 3: Port Management and Conflict Resolution
If you run multiple MCP servers — and most practical deployments will have several (one for files, one for email, one for a database, etc.) — port management becomes essential. This is no different from managing any other set of Docker services.
1. Assign each MCP server a unique port. A simple convention: start at 3000 and increment. Document your assignments.
Port 3000: File system MCP server
Port 3001: Email MCP server (note: conflicts with AnythingLLM's default port if you are running our zero-cost AI stack [link to zero-cost article] — change one of them)
Port 3002: Database MCP server
Port 3003: Calendar MCP server
2. Check for port conflicts before starting a new server:
ss -tlnp | grep ':300'
If the port is already in use, either change your MCP server's port or stop the conflicting service.
3. For stdio-based servers, port management is irrelevant — they do not open ports. This is one of the reasons stdio is preferred for local tools.
Step 4: Reverse Proxy for Remote Access
If you need to access an MCP server from a different machine — for example, running the server on your home server but connecting from Claude Desktop on your laptop — you have two options: SSH tunneling or a reverse proxy.
Option A: SSH Tunnel (Simpler)
Forward the MCP server's port through SSH:
ssh -N -L 3000:localhost:3000 user@your-server-ip
Then configure your MCP client to connect to localhost:3000 on your laptop. The connection is encrypted through SSH. The MCP server port stays bound to localhost on the server.
Option B: Reverse Proxy with TLS (Production-Grade)
For always-on access or if multiple clients need to reach the server, a reverse proxy with automatic TLS is more maintainable. Caddy is the simplest option because it handles TLS certificates automatically.
1. Install Caddy on your server.
2. Create a Caddyfile:
mcp.yourdomain.com {
reverse_proxy localhost:3000
tls internal
}
The tls internal directive generates a self-signed certificate for internal use. For public-facing deployments (not recommended for home setups), Caddy will automatically obtain a Let's Encrypt certificate.
3. Start Caddy:
caddy start
4. Update your MCP client configuration to use the proxied URL:
"url": "https://mcp.yourdomain.com/mcp"
The reverse proxy terminates TLS, so the connection between your client and the proxy is encrypted even on your local network.
Option C: Tailscale or WireGuard VPN
If you already run a Tailscale mesh or WireGuard VPN for remote home access, your MCP servers are automatically reachable from any device on the VPN without changing port bindings or adding a reverse proxy. Point your MCP client at the Tailscale IP of your server. This is the lowest-friction option if you have the VPN infrastructure already in place.
Step 5: Authentication and Authorization
MCP's specification supports OAuth 2.1 for authentication, but many servers in the current ecosystem ship without any auth at all. This is the same pattern we saw with OpenClaw's gateway — insecure defaults that depend on the user to lock down.
1. At minimum, implement bearer token authentication. Generate a random token:
openssl rand -hex 32
Configure your MCP server to require this token in the Authorization header of every request. Configure your MCP client to send it.
2. For servers exposed beyond localhost (through a reverse proxy or VPN), token auth is the minimum. Consider implementing OAuth 2.1 if your server handles sensitive operations or serves multiple users. The MCP Python and TypeScript SDKs both include OAuth middleware.
3. Use MCP tool annotations to control what each tool can do. Annotations let you mark tools as read-only (safe for any client to call) or write/destructive (requires explicit user confirmation). Not all clients enforce annotations yet, so treat them as defense in depth rather than your primary access control.
4. Store credentials in environment files with restrictive permissions. Do not hardcode tokens in configuration files committed to version control. Our AI security hardening guide [link to security article] covers credential management practices in detail.
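A quick way to verify that a credentials file is not group- or world-readable, using only the standard library (the filename in the comment is just an example):

```python
import os
import stat

def is_private(path: str) -> bool:
    """True if only the owner has any access to the file (mode 0600 or stricter)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# Example: refuse to start if the env file is readable by other users.
# if not is_private(".env"):
#     raise SystemExit("Fix with: chmod 600 .env")
```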
Step 6: Firewall Rules for MCP
If you are running MCP servers on an isolated network segment (recommended), configure your firewall to allow only the specific traffic patterns MCP requires.
For stdio servers: No firewall rules needed. No ports are opened.
For HTTP/SSE servers bound to localhost:
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw enable
No allow rules for MCP ports. The localhost binding handles access restriction. The firewall is defense in depth.
For HTTP/SSE servers behind a reverse proxy:
sudo ufw allow 443/tcp
Allow HTTPS on port 443 (where Caddy listens). Keep MCP server ports themselves locked to localhost. Only the reverse proxy should face the network.
For servers accessed via VPN:
Allow traffic from your VPN subnet only:
sudo ufw allow from 100.64.0.0/10 to any port 3000
(Adjust the subnet to match your Tailscale or WireGuard range.)
Troubleshooting Common MCP Connection Issues
Based on the most frequent complaints from developers deploying MCP servers, here are the issues you are most likely to encounter and how to resolve them.
Server unreachable from client
Check in this order:
- Is the server actually running? (docker compose ps)
- Is it listening on the expected port? (ss -tlnp | grep 3000)
- Is it bound to 0.0.0.0 or 127.0.0.1? If bound to localhost and the client is on a different machine, you need a tunnel, proxy, or VPN.
- Is the host firewall blocking the port? (sudo ufw status)
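The listening-port check can be scripted. A small sketch using the standard library to test whether anything is accepting TCP connections on a given host and port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True means something is accepting on that port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# Example: check the MCP server from the machine it runs on.
# port_open("127.0.0.1", 3000)
```

If this returns True locally but a client on another machine still cannot connect, the problem is the localhost binding, the firewall, or the path between the machines, not the server process itself.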
Connection timeout
MCP SSE connections are long-lived. If you have a reverse proxy, load balancer, or corporate firewall between client and server, check its timeout settings. Many proxies close idle connections after 30-60 seconds, but an SSE connection may sit idle between events for longer than that. In Caddy, you can raise these limits with the read_timeout and write_timeout options of the reverse_proxy HTTP transport, and set flush_interval to -1 so streamed events are passed through instead of buffered.
Connection pooling bottleneck
If multiple AI agents connect to the same MCP server simultaneously, the server may run out of connection capacity. MCP servers are typically single-threaded by default. For concurrent access, run multiple server instances behind a load balancer or increase the concurrency limit in your server framework. Alternatively, give each agent its own dedicated server instance on a different port.
Auth token rejected
Verify the token matches exactly between server configuration and client configuration. Check for trailing whitespace or newline characters in environment files. If using OAuth, verify the token has not expired and the scopes match what the server expects.
Docker networking issues
If your MCP server runs in Docker and needs to reach other services on the host (like Ollama on port 11434), map the special hostname host.docker.internal to the Docker host gateway. In your Docker Compose file, add:
extra_hosts:
  - "host.docker.internal:host-gateway"
Then reference host.docker.internal instead of localhost when your MCP server calls Ollama or other host services.
MCP vs REST: What Is Actually Different
If MCP is "just" JSON-RPC over HTTP, why not use a regular REST API? This is a reasonable question, and understanding the answer helps you deploy MCP more effectively.
A REST API requires the AI model to know the exact endpoint structure, parameter names, and response format of every service it interacts with. This means building custom integrations for every tool. MCP replaces that with a discovery protocol: the client asks the server what tools are available, what parameters they take, and what they return. The AI model reads these descriptions and figures out how to use the tools on its own.
From a networking perspective, the differences are small. MCP uses JSON-RPC instead of REST conventions. MCP supports bidirectional communication through SSE (REST is typically request-response only). MCP has a built-in tool discovery mechanism (REST relies on external documentation like OpenAPI specs). MCP defines an auth framework (OAuth 2.1) as part of the specification rather than leaving it entirely to the implementer.
Everything else — port management, TLS, reverse proxying, firewall rules, container deployment, connection troubleshooting — is identical to what you would do for any HTTP-based service. That is why your networking skills transfer directly.
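To make the discovery contrast concrete, here is a sketch of what a tools/list response might look like and how a client reads it. The response shape follows MCP's convention of returning tool names, descriptions, and JSON Schema input definitions; the specific tool shown is hypothetical.

```python
import json

# What an MCP server might return for a "tools/list" request (hypothetical tool).
raw = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
})

# The client needs no prior knowledge of this server: it reads the names,
# descriptions, and schemas at runtime and hands them to the model.
tools = json.loads(raw)["result"]["tools"]
names = [t["name"] for t in tools]
```

With REST, that schema would live in out-of-band documentation; with MCP, it arrives over the same connection the tool calls use.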
Ongoing Maintenance
Weekly: Verify all MCP server containers are running (docker compose ps). Check port bindings have not changed after updates (ss -tlnp). Review server logs for connection errors or unexpected requests.
Monthly: Pull updated server images (docker compose pull && docker compose up -d). Rotate auth tokens. Check MCP SDK changelogs for transport-layer changes that might affect your deployment. Review reverse proxy and firewall configurations.
Quarterly: Audit which MCP servers are installed and remove any you no longer use. Review tool annotations and permissions. Check whether new MCP transport types or security features have been released that you should adopt.
Frequently Asked Questions
Do I need to know Python or TypeScript to use MCP?
To write your own MCP server, yes — the official SDKs are in Python and TypeScript (with a new PHP SDK gaining traction). But to deploy, configure, secure, and troubleshoot MCP servers that someone else has written, you need Docker and networking skills more than programming skills. Many useful MCP servers are available as pre-built Docker images. Your job is deployment infrastructure, not server code.
Is MCP only for Claude, or does it work with other AI models?
MCP was created by Anthropic but it is an open protocol. Claude Desktop, Cursor, OpenClaw, Continue, Cline, and many other tools support it. The protocol does not care which AI model is on the other side of the client — it defines the transport and message format, not the AI behavior. Any application that implements the MCP client specification can connect to any MCP server.
Can I run MCP servers on a Raspberry Pi?
Yes. MCP servers are lightweight — they are essentially small web services. A Raspberry Pi 5 can comfortably run several MCP servers simultaneously. The Pi is a good choice for always-on MCP servers that handle low-traffic tasks like file access, calendar integration, or smart home control. For high-traffic or compute-intensive servers (like one that runs database queries on large datasets), a mini-PC with more RAM is a better fit.
What is the security risk of running MCP servers at home?
The primary risks are identical to running any other web service at home: exposed ports, missing authentication, and unencrypted connections. An MCP server bound to 0.0.0.0 without auth is reachable by every device on your network and potentially the internet. MCP adds one unique risk: tool poisoning, where a malicious server provides misleading tool descriptions to trick the AI model into executing harmful actions. Defend against this by only running MCP servers from trusted sources, reviewing tool descriptions, and using scanner tools like mcp-scan to check for known poisoning patterns. Our AI security hardening guide [link to security article] covers these threats in detail.
Should I use stdio or HTTP transport?
Use stdio for any MCP server that runs on the same machine as your AI client and does not need to be shared across devices. Stdio is simpler, more secure (no network exposure), and faster (no HTTP overhead). Use HTTP transport when the server needs to run on a different machine, serve multiple clients, or persist independently of any one client session. When in doubt, start with stdio and move to HTTP only when you have a specific reason to.
How does MCP relate to the other tools covered on modemguides?
MCP is the protocol layer that connects AI agents (like OpenClaw) to external tools and data. If you have set up OpenClaw with Home Assistant [link to OpenClaw article], the ha-mcp skill uses MCP to bridge OpenClaw and Home Assistant. If you are running our zero-cost AI agent stack [link to zero-cost article], n8n can act as an MCP client to trigger Ollama-powered workflows from external tools. MCP is the networking glue between these systems — and deploying it securely requires the same infrastructure skills this site has always covered for routers, firewalls, and home networks.

