MP-201a · Module 3

Deployment Strategies

4 min read

MCP servers have three deployment models, each with different tradeoffs. Local distribution ships your server as an npm or pip package that users install and run on their own machines over the stdio transport. Because the server runs locally, it has access to local files and resources — ideal for developer tools, code analysis, and file system operations. The downside is that every user manages their own installation, updates, and dependencies.
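As a sketch of the local model, an npm-distributed server typically declares a bin entry so a client can launch it over stdio (the package and path names here are illustrative, not from a real package):

```json
{
  "name": "example-mcp-server",
  "version": "1.0.0",
  "bin": {
    "example-mcp-server": "./dist/index.js"
  },
  "files": ["dist"]
}
```

A client configuration would then launch it with something like `npx -y example-mcp-server`, wiring the process's stdin and stdout to the stdio transport — which is exactly why the user ends up owning installation and updates.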

Container deployment wraps your server in a Docker image. This solves the dependency problem — the container includes everything the server needs, and users run it with a single docker command. The MCP client communicates with the container via stdio (docker run with the -i flag to keep stdin open) or HTTP (a mapped port). Containers also enable resource isolation: you can limit CPU, memory, and network access at the container level, adding a security layer beyond what the MCP SDK provides.

Cloud deployment runs your server as a managed service — Cloudflare Workers, AWS Lambda, Google Cloud Functions, or a persistent server on a VM. The server uses HTTP transport, and clients connect to its public URL. This model is best for shared resources (team databases, API gateways, knowledge bases) where you want one server instance serving multiple users with centralized management, monitoring, and access control.
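The HTTP model can be sketched without any SDK: at bottom it is a JSON-RPC 2.0 endpoint behind a public URL. The handler below is a hypothetical stand-in for what an MCP SDK HTTP transport does — the `ping` method and `handleRequest` helper are illustrative, not the SDK's actual API:

```typescript
import { createServer } from "node:http";

// Minimal JSON-RPC 2.0 shapes — the wire format MCP messages use.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number | string;
  method: string;
  params?: unknown;
};
type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number | string;
  result?: unknown;
  error?: { code: number; message: string };
};

// Illustrative method table; a real MCP server dispatches the
// protocol's own methods (initialize, tools/call, and so on).
const methods: Record<string, (params: unknown) => unknown> = {
  ping: () => ({ ok: true }),
};

// Parse one POST body, dispatch, and build the response.
function handleRequest(body: string): JsonRpcResponse {
  const req = JSON.parse(body) as JsonRpcRequest;
  const fn = methods[req.method];
  if (!fn) {
    return {
      jsonrpc: "2.0",
      id: req.id,
      error: { code: -32601, message: "Method not found" },
    };
  }
  return { jsonrpc: "2.0", id: req.id, result: fn(req.params) };
}

// Wiring the handler to Node's built-in HTTP server gives the
// public URL that remote clients connect to.
const server = createServer((req, res) => {
  let data = "";
  req.on("data", (chunk) => (data += chunk));
  req.on("end", () => {
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(handleRequest(data)));
  });
});
// server.listen(3001);
```

One server instance like this serves many users, which is where the centralized management, monitoring, and access control of the cloud model come from — and why authentication (covered under "Avoid This" below) matters.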

# Multi-stage build for an MCP server
FROM node:22-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:22-slim AS runtime
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
# Install production dependencies only, so devDependencies
# from the build stage never reach the runtime image
RUN npm ci --omit=dev

# Stdio transport: read from stdin, write to stdout
# Run with: docker run -i mcp-server
ENTRYPOINT ["node", "dist/index.js"]

# For HTTP transport, expose the port instead:
# EXPOSE 3001
# ENTRYPOINT ["node", "dist/http-server.js"]
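Assuming the image above is built with an illustrative tag of `mcp-server`, the container-level isolation described earlier is applied at run time; the specific limits below are examples, not recommendations:

```shell
# Build the image
docker build -t mcp-server .

# Stdio transport: -i keeps stdin open for the JSON-RPC stream;
# memory, CPU, and network limits add isolation beyond the SDK
docker run -i --memory=256m --cpus=0.5 --network=none mcp-server

# HTTP transport variant: map the exposed port instead
docker run -p 3001:3001 mcp-server
```

Note that --network=none only suits servers that need no outbound access (e.g. file or code-analysis tools); drop it for servers that call external APIs.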

Do This

  • Use local/stdio for developer tools that need filesystem access
  • Use containers when your server has complex dependencies or needs isolation
  • Use cloud/HTTP for shared resources accessed by multiple users or teams
  • Pin your base image versions and lock dependencies for reproducible builds
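On the pinning advice: a tag like node:22-slim can still drift as the image is rebuilt upstream, while a digest makes the base immutable. A sketch, with the digest shown as a placeholder rather than a real value:

```dockerfile
# The tag documents the version; the digest pins the exact image
FROM node:22-slim@sha256:<digest> AS build

# npm ci fails if package-lock.json is missing or out of sync with
# package.json, so the locked dependency tree is reproduced exactly
RUN npm ci
```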

Avoid This

  • Deploying a filesystem tool as a cloud server — it needs local access to be useful
  • Shipping a server with unlocked dependencies — builds will break when a transitive dependency updates
  • Running cloud MCP servers without authentication — treat them like any public API
  • Assuming Docker overhead is negligible — container startup adds latency to stdio cold starts