EC-301a · Module 2

The AI Risk Slide

5 min read

Every board AI presentation needs a risk slide. This is not optional. A board that does not see a risk slide will invent the risks themselves — and their version will be worse than yours because they are working from fear, not analysis. The risk slide is not a liability. It is the slide that demonstrates competence.

AI risk for board purposes falls into five categories:

- Operational risk: the AI fails to perform as expected and creates operational disruption.
- Data risk: proprietary or customer data is exposed through the AI system or its training process.
- Liability risk: AI-generated output causes harm to a customer, a third party, or the organization.
- Regulatory risk: the organization runs afoul of emerging AI regulations (EU AI Act, state-level legislation, sector-specific requirements).
- Reputational risk: the AI initiative becomes public in a negative context (bias claims, accuracy failures, or misuse).

Each risk category needs three fields on the slide: likelihood (low, medium, high), impact (low, medium, high), and mitigation. The mitigation must be specific. "We will monitor the system" is not a mitigation. "Weekly accuracy audits with a defined accuracy threshold below which the system is suspended pending review" is a mitigation.

# AI Risk Analysis — [Initiative Name]

| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| Operational: AI output quality below acceptable threshold | Medium | High | Weekly accuracy audits against defined KPI baseline. System suspended if accuracy < 85% pending review. |
| Data: PII exposure through training or inference | Low | High | Automated PII filtering before any data processing. Quarterly data governance audit. Vendor DPA executed. |
| Liability: AI-generated content causes customer harm | Low | High | Human review required before any customer-facing output. Legal-reviewed output approval workflow. |
| Regulatory: Non-compliance with AI regulations | Medium | Medium | Quarterly regulatory scan by outside counsel. Initiative paused if compliance gaps are identified. |
| Reputational: Negative public disclosure of AI use | Low | Medium | External AI use policy published. Customer disclosure where required by law. Crisis communications plan drafted. |

## Residual Risk Statement
After mitigation, the primary residual risk is [Operational: accuracy degradation between audit cycles].
This risk is accepted as manageable given [the human review requirement at the final output stage].
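If the risk register lives in a spreadsheet or document today, it can also be kept as structured data and rendered into the slide table on demand, which keeps likelihood and impact values constrained to the three-point scale. The sketch below is illustrative, not part of the module; the `Risk` class and `render_slide` function are assumed names, and the validation rule simply enforces the Low/Medium/High scale described above.

```python
from dataclasses import dataclass

# The three-point scale used on the slide for likelihood and impact.
LEVELS = ("Low", "Medium", "High")

@dataclass
class Risk:
    # Illustrative field names, one per column of the slide template.
    category: str
    likelihood: str
    impact: str
    mitigation: str

    def __post_init__(self):
        # Reject values outside the Low/Medium/High scale.
        if self.likelihood not in LEVELS or self.impact not in LEVELS:
            raise ValueError("likelihood and impact must be Low, Medium, or High")

def render_slide(risks):
    """Render the register as a markdown table matching the slide layout."""
    lines = [
        "| Risk | Likelihood | Impact | Mitigation |",
        "|------|------------|--------|------------|",
    ]
    for r in risks:
        lines.append(f"| {r.category} | {r.likelihood} | {r.impact} | {r.mitigation} |")
    return "\n".join(lines)
```

One advantage of this shape is that the mitigation column stays free text (mitigations must be specific, so they resist enumeration) while the scored fields are validated.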