LR-201b · Module 2

AI Impact Assessments

3 min read

Multiple frameworks, including the EU AI Act, Colorado's AI Act, and emerging international standards, require impact assessments for high-risk AI systems. An AI impact assessment is a structured evaluation of what the AI system does, who it affects, what can go wrong, and what safeguards are in place. Think of it as the contract review for AI deployment: the document that proves you thought about the consequences before they happened.

  1. System Description: What does the AI system do? What data does it process? What decisions does it make or inform? Who are the affected parties? The description must be specific enough that someone unfamiliar with the system can understand its scope, function, and reach.
  2. Risk Identification: What can go wrong? Bias in outputs, errors in automated decisions, privacy violations, discriminatory impact, security vulnerabilities. For each risk, estimate severity and likelihood. The risk identification should be comprehensive, not optimistic: the purpose is to find problems, not to demonstrate safety.
  3. Mitigation Measures: For each identified risk, what controls are in place? Human oversight requirements, bias testing protocols, data quality standards, incident response procedures. Every risk should have a corresponding mitigation; unmitigated risks must be explicitly accepted with a documented business justification. The first sketch after this list illustrates this rule.
  4. Monitoring Plan: How will you know if the mitigation measures are working? What metrics will you track? How often will you reassess? The monitoring plan transforms the impact assessment from a point-in-time document into an ongoing compliance mechanism. The second sketch after this list shows one way to encode such a plan.
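
To make the first three components concrete, here is a minimal sketch of an assessment captured as a typed record, with a check enforcing the rule from item 3: every risk is either mitigated or explicitly accepted with a documented justification. The names (Risk, ImpactAssessment, unresolved_risks) and the three-level severity scale are illustrative assumptions, not terms drawn from the EU AI Act, Colorado's AI Act, or any other framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    """Coarse scale for the severity and likelihood estimates in item 2."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    description: str                  # e.g. "discriminatory impact in scoring"
    severity: Level
    likelihood: Level
    mitigation: str | None = None     # the control addressing this risk, if any
    accepted: bool = False            # risk explicitly accepted without mitigation
    justification: str | None = None  # business justification, required when accepted


@dataclass
class ImpactAssessment:
    # Item 1 -- system description: scope, function, and reach
    system_name: str
    purpose: str
    data_processed: list[str]
    decisions_informed: list[str]
    affected_parties: list[str]
    # Items 2 and 3 -- identified risks with their mitigations
    risks: list[Risk] = field(default_factory=list)


def unresolved_risks(assessment: ImpactAssessment) -> list[Risk]:
    """Risks that are neither mitigated nor explicitly accepted with a
    documented justification; any entry here should block sign-off."""
    return [
        r for r in assessment.risks
        if r.mitigation is None and not (r.accepted and r.justification)
    ]
```

Modeling acceptance as an explicit flag with a required justification, rather than simply leaving the mitigation field blank, means an incomplete assessment fails loudly instead of passing silently.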
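
The monitoring plan lends itself to the same treatment. This second sketch pairs each tracked metric with a threshold and records a reassessment cadence; again, the names (Metric, MonitoringPlan, breached_metrics) and the example values are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Metric:
    name: str             # e.g. "false positive rate gap across groups"
    threshold: float      # value beyond which the mitigation is presumed failing
    current_value: float  # latest measured value


@dataclass
class MonitoringPlan:
    metrics: list[Metric]
    reassess_every: timedelta  # agreed cadence, e.g. timedelta(days=90)
    last_assessed: date

    def breached_metrics(self) -> list[Metric]:
        """Metrics past their threshold: a signal that a mitigation
        measure is not working and the assessment needs revisiting."""
        return [m for m in self.metrics if m.current_value > m.threshold]

    def reassessment_due(self, today: date) -> bool:
        """True once the agreed cadence has elapsed since the last review."""
        return today - self.last_assessed >= self.reassess_every
```

Storing the threshold next to the current value makes "are the mitigations working?" a mechanical check rather than a judgment call, and the cadence field gives a concrete answer to "how often will you reassess?"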