MP-301b · Module 2
MCP Inspector Deep Dive
4 min read
The MCP Inspector is more than a debugging tool — it is your primary integration test harness during development. Running npx @modelcontextprotocol/inspector launches an interactive UI that connects to your server and provides three critical views: the tool catalog (are your definitions rendering correctly?), the call interface (can you invoke every tool with test inputs?), and the raw JSON-RPC panel (are the protocol messages well-formed?). The raw panel is where you catch protocol bugs that unit tests miss — malformed content arrays, missing isError flags, and incorrect response schemas.
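The invariants you eyeball in the raw panel can also be captured in code. Here is a minimal sketch of such an audit; the validator and its type names are hypothetical, not part of the Inspector or the SDK, and the real SDK ships richer types:

```typescript
// Hypothetical audit mirroring what the raw JSON-RPC panel lets you check:
// the content array is present and well-formed, and isError is a real boolean.
type ContentBlock = { type: string; text?: string };
type CallToolResult = { content?: unknown; isError?: unknown };

function auditCallToolResult(result: CallToolResult): string[] {
  const problems: string[] = [];
  if (!Array.isArray(result.content)) {
    problems.push("content is missing or not an array");
    return problems;
  }
  result.content.forEach((block: ContentBlock, i: number) => {
    if (typeof block.type !== "string") {
      problems.push(`content[${i}] is missing a type`);
    } else if (block.type === "text" && typeof block.text !== "string") {
      problems.push(`content[${i}] has type "text" but no text string`);
    }
  });
  if (result.isError !== undefined && typeof result.isError !== "boolean") {
    problems.push("isError is present but not a boolean");
  }
  return problems;
}
```

A handler that returns a bare string instead of a content array, or sets `isError: "true"` as a string, shows up immediately in an audit like this, which is exactly the class of bug the raw panel exposes.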
Build an Inspector testing checklist that you run before every release. For each tool:

- verify the description is clear and complete,
- confirm all schema properties render with descriptions,
- call with valid inputs and check the response format,
- call with invalid inputs and verify error messages are actionable,
- check the raw JSON-RPC for protocol compliance.

This manual pass takes about 5 minutes per tool and catches the bugs that automated tests miss, especially description quality and error message clarity, which are hard to assert in code.
For automated Inspector-style testing, use the MCP SDK's client library to replicate what the Inspector does programmatically. Call client.listTools() and assert on the shape of every tool definition. Call client.callTool() with both valid and invalid inputs. Parse the responses and verify structure. This is your protocol-level regression suite — it catches regressions in tool definitions, response formats, and error handling that unit tests on isolated handlers would miss.
```typescript
import { describe, it, expect, beforeAll, afterAll } from "vitest";
import { createTestPair } from "../helpers/fixture-server.js";

describe("protocol compliance", () => {
  let client: Awaited<ReturnType<typeof createTestPair>>["client"];
  let cleanup: () => Promise<void>;

  beforeAll(async () => {
    ({ client, cleanup } = await createTestPair());
  });

  afterAll(() => cleanup());

  it("lists all expected tools", async () => {
    const { tools } = await client.listTools();
    const names = tools.map(t => t.name);
    expect(names).toContain("get_customer");
    expect(names).toContain("search_customers");
    expect(names).toContain("list_customers");
  });

  it("every tool has a description and valid schema", async () => {
    const { tools } = await client.listTools();
    for (const tool of tools) {
      expect(tool.description, `${tool.name} missing description`).toBeTruthy();
      expect(tool.inputSchema.type).toBe("object");
      // Every property should have a description
      const props = tool.inputSchema.properties ?? {};
      for (const [key, prop] of Object.entries(props)) {
        expect(
          (prop as Record<string, unknown>).description,
          `${tool.name}.${key} missing description`,
        ).toBeTruthy();
      }
    }
  });

  it("error responses have isError: true", async () => {
    const result = await client.callTool({
      name: "get_customer",
      arguments: { customer_id: "INVALID" },
    });
    expect(result.isError).toBe(true);
    expect(result.content).toHaveLength(1);
    expect(result.content[0].type).toBe("text");
  });

  it("success responses have well-formed content array", async () => {
    const result = await client.callTool({
      name: "get_customer",
      arguments: { customer_id: "CUS-001" },
    });
    expect(result.isError).toBeFalsy();
    expect(result.content[0].type).toBe("text");
    // Verify the response body is parseable JSON
    expect(() => JSON.parse(result.content[0].text)).not.toThrow();
  });
});
```
- **Run the Inspector manually first.** Before writing automated tests, use the Inspector to understand your server's current behavior. Note any surprises; those become your first test cases.
- **Build protocol compliance tests.** Test tool listing, description quality, schema completeness, error response format, and success response format. These tests are tool-agnostic and apply to every server.
- **Add per-tool integration tests.** For each tool, test at least one valid call, one invalid call, and one edge case through the full client-server path using InMemoryTransport.