LR-201c · Module 1
Risk Discovery Techniques
3 min read
Risk identification is a discipline, not a brainstorm. The "what could go wrong?" meeting where people call out risks from memory is a start, but it is not a methodology. A methodology is repeatable, exhaustive, and structured — it catches risks that memory would miss because it does not depend on anyone having thought of the risk before.
- **Failure Mode Analysis:** For every component in the AI system, ask: how can this fail? What happens when the data pipeline delivers corrupted data? What happens when model confidence drops below threshold? What happens when API response times exceed the SLA? Walk every component through its failure modes systematically. The exercise is tedious. The gaps it reveals are not.
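The component-by-component walk above can be sketched as a small checklist generator. This is a minimal illustration, not a tool the module prescribes; the component names and failure modes are assumptions drawn from the examples in the text.

```python
# A minimal sketch of a failure-mode walk: each component is paired with
# prompts for its failure modes, and the cross product becomes a review
# checklist. Component names and modes here are illustrative examples.

COMPONENTS = {
    "data_pipeline": ["delivers corrupted data", "delivers stale data"],
    "model": ["confidence drops below threshold", "drifts from training data"],
    "serving_api": ["response time exceeds SLA", "returns malformed output"],
}

def failure_mode_checklist(components):
    """Expand every (component, failure mode) pair into a reviewable question."""
    return [
        f"What happens when {component}: {mode}?"
        for component, modes in components.items()
        for mode in modes
    ]

checklist = failure_mode_checklist(COMPONENTS)
for question in checklist:
    print(question)
```

The value is in the exhaustive cross product: every component must answer every one of its failure-mode questions, so nothing depends on someone remembering to ask.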
- **Stakeholder Impact Mapping:** Identify every person or group affected by the AI system's outputs — direct users, downstream consumers, data subjects, and the general public. For each stakeholder, ask: what harm could this system cause them? A hiring AI affects candidates. A credit scoring AI affects applicants. A content generation AI affects audiences. Each stakeholder group has different risk exposure.
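One way to make the stakeholder map auditable is to record harms per group and flag groups with none recorded — those are usually blind spots, not safe groups. A hedged sketch; the stakeholder names and harms are illustrative, not a complete inventory.

```python
# Map each stakeholder group to the harms the system could cause them,
# then surface groups whose harms are still unassessed. All entries below
# are example data, not a standard taxonomy.

stakeholder_harms = {
    "candidates": ["discriminatory rejection", "opaque scoring"],
    "downstream_consumers": ["decisions built on wrong outputs"],
    "data_subjects": ["privacy exposure"],
    "general_public": [],  # nothing recorded yet -> a gap to investigate
}

def unassessed_groups(harms_by_group):
    """Return stakeholder groups with no documented harms -- likely blind spots."""
    return [group for group, harms in harms_by_group.items() if not harms]

print(unassessed_groups(stakeholder_harms))  # -> ['general_public']
```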
- **Scenario Modeling:** Construct specific, detailed scenarios of AI failure and trace the consequences. "The model recommends a candidate who files a discrimination claim. What is the liability chain? What evidence exists? What contract provisions apply?" Scenarios make abstract risks concrete and testable.
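A scenario becomes testable once its open questions are answered, and that property can be captured in a simple record. The field names and the `is_testable` check below are assumptions for illustration, not a standard schema.

```python
# A sketch of a scenario record: each failure scenario names its trigger,
# the consequence chain to trace, and the questions it must answer before
# it counts as testable. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    trigger: str
    consequence_chain: list
    open_questions: list = field(default_factory=list)

    def is_testable(self):
        # A scenario is testable once every open question has been resolved.
        return not self.open_questions

hiring_claim = Scenario(
    trigger="Model recommends a candidate who files a discrimination claim",
    consequence_chain=["liability chain", "evidence trail", "contract provisions"],
    open_questions=["What evidence exists?", "What contract provisions apply?"],
)

print(hiring_claim.is_testable())  # -> False
```

Working the open-question list down to empty is exactly the tracing exercise the bullet describes: each answered question turns an abstract risk into a documented, checkable consequence.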
- **Historical Pattern Analysis:** Study documented AI failures in your industry and adjacent industries. The risks are not theoretical — they have already materialized elsewhere. Consider Amazon's recruiting AI that discriminated against women, or the healthcare algorithms that underserved minority populations. Each case is a risk pattern your assessment should address.