MP-301b · Module 1

Schema Validation Testing

3 min read

Your JSON Schema definitions are the contract between the LLM and your tool. Testing the schema means verifying that valid inputs pass, invalid inputs are rejected with useful messages, and the schema accurately describes what your handler actually accepts. Schema bugs are insidious — a missing "required" field means the LLM can omit a critical parameter, and your handler receives undefined instead of a string. The schema test catches this before the LLM does.
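To see that failure mode concretely, here is a minimal sketch (not from the module's codebase) of a schema whose author forgot the "required" list; the toy required-field check below stands in for a real validator like Ajv:

```typescript
// Hypothetical schema: "path" was meant to be required, but the
// "required" array was never added.
const inputSchema: { properties: Record<string, unknown>; required?: string[] } = {
  properties: { path: { type: "string", description: "File to read" } },
  // required: ["path"]  <-- forgotten
};

// Toy stand-in for a compiled validator: checks required fields only.
function passesRequired(
  schema: { required?: string[] },
  input: Record<string, unknown>
): boolean {
  return (schema.required ?? []).every((field) => field in input);
}

// The empty call slips through validation...
const args: { path?: string } = {};
console.log(passesRequired(inputSchema, args)); // true
// ...and the handler sees undefined where it expected a string.
console.log(args.path); // undefined
```

Restoring `required: ["path"]` is what makes the same empty input fail validation before it ever reaches the handler.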

Schema-handler drift is the most common source of tool bugs in mature servers. You update the handler to accept a new optional parameter but forget to add it to the schema, or you tighten validation in the handler but leave the schema permissive. Test both directions: generate test inputs from the schema (valid combinations of required and optional fields) and verify the handler accepts them, then generate invalid inputs that violate the schema and verify they are rejected before they ever reach the handler.

import { describe, it, expect } from "vitest";
import Ajv from "ajv";
import { toolDefinitions } from "../../src/tools.js";

const ajv = new Ajv({ allErrors: true });

describe("tool schema validation", () => {
  for (const tool of toolDefinitions) {
    describe(tool.name, () => {
      const validate = ajv.compile(tool.inputSchema);

      it("has a description under 500 chars", () => {
        expect(tool.description.length).toBeLessThan(500);
        expect(tool.description.length).toBeGreaterThan(20);
      });

      it("has required fields defined", () => {
        const required = tool.inputSchema.required ?? [];
        for (const field of required) {
          expect(tool.inputSchema.properties).toHaveProperty(field);
        }
      });

      it("every property has a description", () => {
        const props = tool.inputSchema.properties ?? {};
        for (const [key, prop] of Object.entries(props)) {
          expect((prop as { description?: string }).description,
            `Property "${key}" missing description`
          ).toBeTruthy();
        }
      });

      it("rejects empty object when required fields exist", () => {
        const required = tool.inputSchema.required ?? [];
        if (required.length > 0) {
          expect(validate({})).toBe(false);
        }
      });

      it("accepts a valid example input", () => {
        // Each tool should export a validExample for testing
        if (tool.validExample) {
          expect(validate(tool.validExample)).toBe(true);
        }
      });
    });
  }
});
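Beyond rejecting the empty object, invalid inputs can be generated mechanically: drop each required field from a tool's validExample and assert the validator rejects every variant. A sketch under the same assumptions (each tool exports a validExample; the required-only check stands in for the compiled Ajv validator):

```typescript
// Minimal shape of a tool definition, assumed for this sketch.
type ToolDef = {
  name: string;
  inputSchema: { required?: string[]; properties?: Record<string, unknown> };
  validExample: Record<string, unknown>;
};

// Derive one invalid input per required field by deleting that field
// from a known-good example.
function invalidVariants(tool: ToolDef): Record<string, unknown>[] {
  return (tool.inputSchema.required ?? []).map((field) => {
    const variant = { ...tool.validExample };
    delete variant[field];
    return variant;
  });
}

// Stand-in for a compiled Ajv validator: checks required fields only.
function requiredOnly(schema: ToolDef["inputSchema"]) {
  return (input: Record<string, unknown>) =>
    (schema.required ?? []).every((f) => f in input);
}

// Hypothetical tool definition, for illustration only.
const readFile: ToolDef = {
  name: "read_file",
  inputSchema: {
    required: ["path"],
    properties: { path: { type: "string", description: "File to read" } },
  },
  validExample: { path: "README.md" },
};

const validate = requiredOnly(readFile.inputSchema);
console.log(validate(readFile.validExample)); // true
for (const variant of invalidVariants(readFile)) {
  console.log(validate(variant)); // false -- each variant must be rejected
}
```

In the test suite above, the same loop would run `expect(validate(variant)).toBe(false)` for every variant of every tool.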

Do This

  • Test that every schema property has a description — the LLM reads these
  • Validate that required fields match actual handler expectations
  • Generate both valid and invalid inputs to test schema boundaries
  • Export validExample objects alongside tool definitions for automated testing

Avoid This

  • Skip schema tests because "the SDK validates for us" — the SDK validates structure, not semantics
  • Let schema and handler drift apart — test both accept/reject the same inputs
  • Write descriptions over 500 characters — they consume context tokens and dilute the signal
  • Define required fields that the handler treats as optional, or vice versa
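The last two points collapse into one drift test: feed the same inputs to both the validator and the handler and require them to agree. A sketch with a hypothetical read_file handler (names and behavior are illustrative, not from the module's codebase):

```typescript
// Hypothetical handler that treats "path" as required at runtime --
// so the schema must list "path" in its "required" array, and vice versa.
function handleReadFile(args: { path?: string }): string {
  if (typeof args.path !== "string") throw new Error("path is required");
  return `read ${args.path}`;
}

// Helper: did the handler accept the input?
function handlerAccepts(args: { path?: string }): boolean {
  try {
    handleReadFile(args);
    return true;
  } catch {
    return false;
  }
}

// Direction 1: a schema-valid input must be accepted by the handler.
console.log(handlerAccepts({ path: "README.md" })); // true

// Direction 2: an input missing a schema-required field must be
// rejected by the handler too -- otherwise "path" is not really required.
console.log(handlerAccepts({})); // false
```

If either direction disagrees with the schema's verdict, you have drift: either the schema over-promises or the handler under-enforces.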