Build an MCP Server That Controls Your DevOps Tools with AI
Model Context Protocol (MCP) lets AI assistants like Claude control kubectl, Terraform, and AWS CLI directly. Here's how to build your own MCP server for DevOps automation.
Model Context Protocol (MCP) is the standard that lets AI assistants like Claude actually do things — not just talk about them. Instead of copying kubectl output into a chat window, you give the AI a tool that runs kubectl directly.
Here's how to build an MCP server that gives AI access to your DevOps tools.
What is MCP?
MCP (Model Context Protocol) is an open standard by Anthropic that defines how AI models communicate with external tools and data sources. Think of it as a USB-C port for AI — any MCP-compatible tool plugs into any MCP-compatible AI.
Claude / AI Assistant
↓ MCP Protocol
MCP Server (your code)
↓
kubectl / terraform / AWS CLI / your APIs
When Claude needs to check pod status, it calls your MCP server's get_pods tool. Your server runs kubectl get pods and returns the result. Claude reads it and responds intelligently.
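Under the hood, MCP speaks JSON-RPC 2.0 over the transport (stdio, in this article). A `tools/call` exchange looks roughly like this; the pod listing in the response is made up for illustration:

```python
import json

# When Claude invokes a tool, the client sends a tools/call request:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_pods",
        "arguments": {"namespace": "default"},
    },
}

# The server replies with a result whose content is a list of blocks:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "api-server-7d4b9c   1/1   Running   0   2d"}
        ]
    },
}

# On the wire these are serialized JSON messages on the server's stdin/stdout
wire = json.dumps(request)
print(wire)
```

The SDKs below handle this framing for you; you only implement the tool list and the tool handlers.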
What You'll Build
An MCP server with these DevOps tools:
- get_pods — list pods in a namespace
- get_pod_logs — fetch logs from a pod
- get_events — show K8s events
- run_kubectl — run arbitrary kubectl commands (with safety limits)
- terraform_plan — run terraform plan in a directory
- aws_describe_instances — list EC2 instances

The server below implements all of these except run_kubectl and aws_describe_instances, which follow the same pattern.
Prerequisites
npm install @modelcontextprotocol/sdk
# or Python
pip install mcp

You'll need:
- Node.js 18+ or Python 3.10+
- kubectl configured with cluster access
- (Optional) Terraform, AWS CLI
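A quick preflight check for the prerequisites above, sketched with the Python standard library (`missing_tools` is a hypothetical helper):

```python
import shutil

def missing_tools(tools: list[str]) -> list[str]:
    """Return the subset of CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    required = ["kubectl"]
    optional = ["terraform", "aws"]
    gone = missing_tools(required)
    if gone:
        print(f"Missing required tools: {', '.join(gone)}")
    else:
        print("All required tools found")
    for t in missing_tools(optional):
        print(f"Optional tool not found: {t}")
```

Running this before wiring the server into Claude saves a round of debugging opaque "tool failed" responses later.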
Build the MCP Server (Node.js)
// devops-mcp-server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { exec } from "child_process";
import { promisify } from "util";
const execAsync = promisify(exec);
const server = new Server(
{ name: "devops-tools", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
// Define the tools Claude can call
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: "get_pods",
description: "List all pods in a Kubernetes namespace",
inputSchema: {
type: "object",
properties: {
namespace: {
type: "string",
description: "Kubernetes namespace (default: default)",
},
},
},
},
{
name: "get_pod_logs",
description: "Get logs from a Kubernetes pod",
inputSchema: {
type: "object",
required: ["pod_name"],
properties: {
pod_name: { type: "string", description: "Name of the pod" },
namespace: { type: "string", description: "Namespace" },
lines: { type: "number", description: "Number of log lines (default: 50)" },
},
},
},
{
name: "get_events",
description: "Get recent Kubernetes events, useful for troubleshooting",
inputSchema: {
type: "object",
properties: {
namespace: { type: "string" },
},
},
},
{
name: "terraform_plan",
description: "Run terraform plan in a directory",
inputSchema: {
type: "object",
required: ["directory"],
properties: {
directory: { type: "string", description: "Path to terraform directory" },
},
},
},
],
};
});
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  try {
    switch (name) {
      case "get_pods": {
        const ns = String(args?.namespace || "default");
        // Validate before interpolating into a shell command
        if (!/^[a-z0-9][a-z0-9.-]*$/.test(ns)) {
          throw new Error(`Invalid namespace: ${ns}`);
        }
        const { stdout } = await execAsync(
          `kubectl get pods -n ${ns} --no-headers -o wide`
        );
return {
content: [{ type: "text", text: stdout || "No pods found" }],
};
}
      case "get_pod_logs": {
        const pod = String(args?.pod_name ?? "");
        const ns = String(args?.namespace || "default");
        const lines = Number(args?.lines) || 50;
        // Validate before interpolating into a shell command
        if (!/^[a-z0-9][a-z0-9.-]*$/.test(pod) || !/^[a-z0-9][a-z0-9.-]*$/.test(ns)) {
          throw new Error("Invalid pod or namespace name");
        }
        const { stdout } = await execAsync(
          `kubectl logs ${pod} -n ${ns} --tail=${lines}`
        );
return {
content: [{ type: "text", text: stdout || "No logs found" }],
};
}
      case "get_events": {
        const ns = String(args?.namespace || "default");
        // Validate before interpolating into a shell command
        if (!/^[a-z0-9][a-z0-9.-]*$/.test(ns)) {
          throw new Error(`Invalid namespace: ${ns}`);
        }
        const { stdout } = await execAsync(
          `kubectl get events -n ${ns} --sort-by='.lastTimestamp' | tail -20`
        );
return {
content: [{ type: "text", text: stdout }],
};
}
      case "terraform_plan": {
        const dir = String(args?.directory ?? "");
        // Safety: only allow simple relative paths inside the current directory
        if (!/^[A-Za-z0-9._\/-]+$/.test(dir) || dir.includes("..") || dir.startsWith("/")) {
          return {
            content: [{ type: "text", text: "Error: only relative paths inside the current directory are allowed" }],
            isError: true,
          };
        }
        const { stdout, stderr } = await execAsync(
          `terraform -chdir=${dir} plan -no-color`,
          { timeout: 120000 }
        );
return {
content: [{ type: "text", text: stdout + stderr }],
};
}
default:
return {
content: [{ type: "text", text: `Unknown tool: ${name}` }],
isError: true,
};
}
} catch (error: any) {
return {
content: [{ type: "text", text: `Error: ${error.message}` }],
isError: true,
};
}
});
// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("DevOps MCP server running");

Connect to Claude Desktop
Add your server to Claude Desktop's config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"devops-tools": {
"command": "node",
"args": ["/path/to/devops-mcp-server.js"],
"env": {
"KUBECONFIG": "/home/user/.kube/config"
}
}
}
}

Restart Claude Desktop. You'll see a hammer icon indicating tools are available.
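A malformed config file is the most common reason the tools never show up. Here is a quick sanity check, sketched in Python; `check_config` is a hypothetical helper, not part of any SDK:

```python
import json
import os

def check_config(path: str) -> list[str]:
    """Return a list of problems found in a Claude Desktop MCP config file."""
    problems = []
    try:
        with open(path) as f:
            cfg = json.load(f)
    except FileNotFoundError:
        return [f"Config file not found: {path}"]
    except json.JSONDecodeError as e:
        return [f"Invalid JSON: {e}"]
    servers = cfg.get("mcpServers", {})
    if not servers:
        problems.append("No mcpServers defined")
    for name, spec in servers.items():
        if "command" not in spec:
            problems.append(f"{name}: missing 'command'")
        # Flag server scripts whose path doesn't exist on disk
        for arg in spec.get("args", []):
            if arg.endswith(".js") and not os.path.exists(arg):
                problems.append(f"{name}: script not found: {arg}")
    return problems

if __name__ == "__main__":
    # Default macOS location; adjust for your platform
    path = os.path.expanduser(
        "~/Library/Application Support/Claude/claude_desktop_config.json"
    )
    for problem in check_config(path) or ["Config looks OK"]:
        print(problem)
```

Run it before restarting Claude Desktop to catch JSON typos and wrong script paths early.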
Python Version (Simpler)
# devops_mcp_server.py
import asyncio
import subprocess
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp import types
app = Server("devops-tools")
@app.list_tools()
async def list_tools() -> list[types.Tool]:
return [
types.Tool(
name="get_pods",
description="List Kubernetes pods in a namespace",
inputSchema={
"type": "object",
"properties": {
"namespace": {"type": "string", "default": "default"}
}
}
),
types.Tool(
name="get_pod_logs",
description="Get logs from a pod",
inputSchema={
"type": "object",
"required": ["pod_name"],
"properties": {
"pod_name": {"type": "string"},
"namespace": {"type": "string", "default": "default"},
"lines": {"type": "integer", "default": 50}
}
}
),
]
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
if name == "get_pods":
ns = arguments.get("namespace", "default")
result = subprocess.run(
["kubectl", "get", "pods", "-n", ns, "-o", "wide"],
capture_output=True, text=True
)
return [types.TextContent(type="text", text=result.stdout or result.stderr)]
elif name == "get_pod_logs":
pod = arguments["pod_name"]
ns = arguments.get("namespace", "default")
lines = arguments.get("lines", 50)
result = subprocess.run(
["kubectl", "logs", pod, "-n", ns, f"--tail={lines}"],
capture_output=True, text=True
)
return [types.TextContent(type="text", text=result.stdout or result.stderr)]
return [types.TextContent(type="text", text=f"Unknown tool: {name}")]
async def main():
async with stdio_server() as (read, write):
await app.run(read, write, app.create_initialization_options())
if __name__ == "__main__":
    asyncio.run(main())

What You Can Now Do
Once connected, you can ask Claude natural language questions:
"Check what's in the default namespace and tell me if any pods are failing"
"Run a terraform plan in ./infrastructure/eks and summarize the changes"
"Get the last 100 logs from the api-server pod and identify the error"
Claude will call your MCP tools, read the output, and give you an intelligent response — no copy-pasting required.
Security Considerations
- Never allow arbitrary shell commands — whitelist specific commands
- No destructive operations — don't expose kubectl delete or terraform apply
- Namespace scope — restrict to specific namespaces in production
- Audit logging — log every tool call with timestamp and arguments
- Read-only kubeconfig — use a ServiceAccount with only get, list, watch permissions
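The first and fourth points can be combined into a single guard around every tool call. Here is a sketch in Python with hypothetical names (`ALLOWED`, `run_guarded`); note the argument-list form of `subprocess.run`, which avoids the shell-injection risk of interpolating strings into a command, as the Node version above must guard against:

```python
import shlex
import subprocess
import sys
from datetime import datetime, timezone

# Hypothetical allowlist: (binary, verb) pairs the server may execute
ALLOWED = {
    ("kubectl", "get"),
    ("kubectl", "logs"),
    ("terraform", "plan"),
}

AUDIT_LOG = []  # in production, append to a file or ship to your log pipeline

def run_guarded(argv: list[str]) -> str:
    """Run a command only if its (binary, verb) pair is allowlisted,
    recording every attempt with a timestamp."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "argv": argv,
        "allowed": tuple(argv[:2]) in ALLOWED,
    }
    AUDIT_LOG.append(entry)
    if not entry["allowed"]:
        return f"Denied: {shlex.join(argv)}"
    # List-form argv: arguments are passed verbatim, never parsed by a shell,
    # so an input like "nginx; rm -rf /" stays one literal argument
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout or result.stderr

if __name__ == "__main__":
    # A non-allowlisted command is refused, but still audited
    print(run_guarded([sys.executable, "-c", "print('hi')"]))
```

Every attempt lands in the audit log whether it ran or not, so a compromised or confused AI session leaves a trail.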
# Read-only ServiceAccount for MCP server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mcp-readonly
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "events"]
    verbs: ["get", "list", "watch"]

Note that pod logs are read through the pods/log subresource; there is no standalone "logs" resource in Kubernetes RBAC.

MCP servers are the future of DevOps tooling — instead of writing custom dashboards and scripts, you give your AI assistant direct, controlled access to your infrastructure. Build once, query in natural language forever.
For more on AI-driven DevOps automation, check out KodeKloud's DevOps AI labs.