CM-301b · Module 2
The Evidence Protocol
4 min read
For the Evidence Skeptic, the only intervention that works is evidence. Not general evidence of AI capability. Not industry case studies. Not vendor references. Evidence calibrated to their specific concern, in the format that their profile can actually process. A High-C skeptic who is concerned about data privacy does not need a deck about AI ROI — they need a documented data governance framework with specific controls, audit trails, and regulatory mapping. Give them the wrong evidence and they will conclude you either do not understand their concern or cannot address it. Both conclusions damage the relationship.
- **Identify the Specific Claim Being Doubted.** Before building an evidence package, identify the precise concern. "AI is risky" is not a specific claim — it is a category. "We don't have adequate controls for AI-generated content in a regulated environment" is specific. The evidence package is built to address the specific claim, not the category. Ask the skeptic directly: "What specifically would you need to see to be satisfied that this concern has been addressed?"
- **Build the Evidence Package.** For High-C profiles: documented, cited, and formatted. Not a slide deck but a document, with sources, methodology, and an analysis of the alternatives considered. For High-D profiles: concise, outcome-focused, with clear implications for authority and decision-making. The same evidence, formatted differently, produces different outcomes.
- **Present and Validate.** Present the evidence package, then explicitly ask for an assessment: "Does this address your concern? What gaps remain?" Do not assume that providing evidence constitutes conversion. Confirm it. The skeptic who reviews the evidence package and says "this helps, but I still have concerns about X" has given you the next evidence requirement. Keep iterating until the specific concern is resolved or explicitly acknowledged as unresolvable.
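The three steps above form a loop: present, ask what gaps remain, fold each stated gap back into the evidence requirements, and repeat until resolution. A minimal sketch of that loop, in Python — all names here (`EvidencePlan`, `run_protocol`, the example responses) are illustrative assumptions, not part of the protocol itself:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EvidencePlan:
    """Hypothetical record of one skeptic's evidence requirements."""
    skeptic_name: str
    profile: str                  # e.g. "High-C" or "High-D"
    specific_concern: str         # the precise claim being doubted, never a category
    evidence_required: List[str] = field(default_factory=list)

def run_protocol(plan: EvidencePlan,
                 validate: Callable[[EvidencePlan], List[str]],
                 max_rounds: int = 5) -> str:
    """Present the package, ask 'What gaps remain?', and iterate until the
    concern is resolved or explicitly acknowledged as unresolvable."""
    for _ in range(max_rounds):
        gaps = validate(plan)     # the skeptic's answer to the validation question
        if not gaps:
            return "resolved"
        # Each stated gap becomes the next evidence requirement.
        plan.evidence_required.extend(gaps)
    return "unresolvable"

# Example: the skeptic raises one gap, then is satisfied on the next round.
responses = iter([["Audit trail documentation"], []])
plan = EvidencePlan("Sarah Chen", "High-C",
                    "Insufficient data governance controls")
print(run_protocol(plan, lambda p: next(responses)))  # prints "resolved"
```

The point the sketch makes is structural: conversion is not one presentation but a bounded iteration, and an exit with "unresolvable" is a legitimate, explicit outcome.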
```json
{
  "skeptic_name": "Sarah Chen",
  "profile": "High-C",
  "specific_concern": "Insufficient data governance controls for regulated environment",
  "evidence_required": [
    "Data classification framework with AI-specific controls",
    "Regulatory mapping to HIPAA / SOX / GDPR as applicable",
    "Audit trail documentation",
    "Vendor security attestations (SOC 2, ISO 27001)",
    "Incident response procedure for AI-specific failures"
  ],
  "format": "Documented report with section headers, cited sources, appendices",
  "presenter": "CISO + Initiative Lead (joint)",
  "validation_question": "Does this address your concern about data governance?",
  "follow_up_timeline": "2 weeks after document delivery"
}
```
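A record like the one above can be checked for completeness before the meeting. The sketch below does that; the required keys mirror the example record, but the checker itself is a hypothetical helper, not a standard tool:

```python
import json

# Keys every evidence-plan record is expected to carry, mirroring the
# example above. This checker is an illustrative sketch, not a real API.
REQUIRED_KEYS = {
    "skeptic_name", "profile", "specific_concern", "evidence_required",
    "format", "presenter", "validation_question", "follow_up_timeline",
}

def check_plan(raw: str) -> list:
    """Return a list of problems with a JSON evidence plan (empty means OK)."""
    plan = json.loads(raw)
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - plan.keys())]
    if not plan.get("evidence_required"):
        problems.append("evidence_required must list at least one item")
    return problems
```

Running the check before delivery catches the most common failure the module warns about: an evidence package that skips a field the skeptic's profile needs.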