RC-401d · Module 1

Risk Assessment Matrix

4 min read

Risk quantification in AI governance is not the same as risk quantification in traditional IT. Traditional risk matrices work with known failure modes — hardware fails, software has bugs, networks go down. AI risk includes all of those plus a category that did not exist five years ago: emergent behavior risk. The model does something nobody predicted, nobody tested for, and nobody knows how to reproduce. Your risk matrix must account for risks you cannot enumerate in advance, which means the scoring methodology matters more than the specific risks you list.

  1. Probability Scoring. Assign probability on a 1-5 scale, but redefine the scale for AI-specific scenarios. A score of 1 is not "unlikely" — it is "no known mechanism for this failure mode." A score of 5 is not "certain" — it is "observed in production or reproducible in testing." The middle range matters most: a 3 means "theoretically possible and consistent with known model behavior." Most AI risks live at 3, which is exactly where organizations under-invest in mitigation.
  2. Impact Classification. Impact must be multi-dimensional. Financial impact is obvious. Regulatory impact — fines, sanctions, license revocation — is quantifiable. Reputational impact is harder but critical: an AI system that generates biased output creates reputational exposure that exceeds the direct financial cost by an order of magnitude. [RISK]: Single-dimension impact scoring consistently underestimates AI risk, because the most damaging AI failures create cascading impacts across the financial, regulatory, and reputational dimensions simultaneously.
  3. Composite Risk Scoring. Probability multiplied by impact gives you a composite score, but the composite is only useful if the inputs are honest. Most organizations score optimistically — probability is rounded down, impact is scoped narrowly. The governance framework must include a calibration step: compare your scores against published incident databases, industry benchmarks, and the risk assessments of organizations that have already been through an AI audit. If your scores are consistently lower than the industry average, you are not less risky. You are less honest.
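The three steps above can be sketched as a small scoring helper. This is a minimal illustration, not a prescribed methodology: the dimension set, the example risk, and the choice to take the worst impact dimension (rather than averaging) are all assumptions for the sketch.

```python
from dataclasses import dataclass

# Probability anchors per the redefined 1-5 scale (wording from the text)
PROBABILITY_ANCHORS = {
    1: "no known mechanism for this failure mode",
    3: "theoretically possible and consistent with known model behavior",
    5: "observed in production or reproducible in testing",
}

@dataclass
class Risk:
    name: str
    probability: int   # 1-5, scored against PROBABILITY_ANCHORS
    # Multi-dimensional impact, each 1-5 (dimension set is an assumption)
    financial: int
    regulatory: int
    reputational: int
    operational: int

    def impact(self) -> int:
        # Take the worst dimension rather than the average: cascading AI
        # failures mean the highest-scoring dimension tends to dominate.
        return max(self.financial, self.regulatory,
                   self.reputational, self.operational)

    def composite(self) -> int:
        # Composite score = probability x impact, range 1-25
        return self.probability * self.impact()

# Hypothetical example: a "3" probability risk whose reputational
# dimension dwarfs its direct financial cost
risk = Risk("biased output in customer-facing model",
            probability=3, financial=2, regulatory=3,
            reputational=5, operational=2)
print(risk.composite())  # 3 * 5 = 15
```

Averaging the four dimensions would score this same risk as 3 × 3 = 9 — the single-dimension flattening the [RISK] note warns about.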

Do This

  • Score probability based on observed model behavior and known failure modes, not assumptions
  • Use multi-dimensional impact scoring: financial, regulatory, reputational, and operational
  • Calibrate your scores against industry benchmarks and published AI incident data
  • Re-score quarterly — AI risk profiles shift as models update, regulations change, and usage patterns evolve
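The calibration step above can be made mechanical with a simple gap check. All figures below are hypothetical placeholders; real benchmark scores would come from published incident databases and industry data, and the gap threshold of 3 is an assumption:

```python
# Internal composite scores vs. an industry benchmark (illustrative data)
internal_scores = {"prompt injection": 6, "biased output": 9, "data leakage": 8}
benchmark_scores = {"prompt injection": 12, "biased output": 15, "data leakage": 10}

def calibration_gaps(internal: dict, benchmark: dict, threshold: int = 3) -> dict:
    """Return risks scored materially below the benchmark (gap >= threshold)."""
    return {risk: benchmark[risk] - score
            for risk, score in internal.items()
            if risk in benchmark and benchmark[risk] - score >= threshold}

gaps = calibration_gaps(internal_scores, benchmark_scores)
print(gaps)  # {'prompt injection': 6, 'biased output': 6}
# If most risks show a gap, suspect optimism bias in the scoring
# methodology, not genuinely lower risk.
```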

Avoid This

  • Reusing your existing IT risk matrix without adapting it for AI-specific failure modes
  • Scoring reputational risk as zero because it is hard to quantify — hard to quantify is not the same as zero
  • Treating the risk matrix as a one-time deliverable — a static matrix is a snapshot of a moving target
  • Letting optimism bias drive scoring — if every risk scores below 3, the methodology is the problem