RC-401i · Module 3
The Behavioral Audit: Identifying High-Risk User Archetypes Before Go-Live
4 min read
Not all users present the same risk profile. A behavioral audit identifies the specific user archetypes in your population who are most likely to use the system in ways that create legal, reputational, or operational risk — not because they are malicious, but because their workflow patterns, DISC profiles, and role incentives create predictable misuse patterns. The behavioral audit is a pre-launch activity. Discovering these archetypes after launch means you have already incurred the risk.
- Archetype 1: The Automator. The Automator is a high-efficiency, deadline-driven user who will find the fastest path from input to output regardless of the intended workflow. They will skip review steps, feed bulk inputs without reading individual outputs, and configure the system in ways that maximize throughput at the cost of accuracy oversight. In roles with accountability for output quality — compliance review, legal drafting, clinical documentation — the Automator creates risk that is invisible until an error surfaces. Identify high-D and high-I DISC profiles in accountability roles; these are your most likely Automators. Design workflow checkpoints that are difficult to bypass, while keeping the added friction below the level that triggers adoption resistance.
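A hard-to-bypass checkpoint can be as simple as refusing to release an output the user has not explicitly acknowledged, while counting unacknowledged release attempts as an Automator signal. A minimal sketch, assuming a hypothetical `ReviewCheckpoint` class (the names `acknowledge`, `release`, and `bypass_attempts` are illustrative, not any real product's API):

```python
from dataclasses import dataclass, field


@dataclass
class ReviewCheckpoint:
    """Release an AI output only after explicit reviewer acknowledgment.

    Hypothetical sketch: tracks per-session bypass attempts so that a
    high count can be surfaced as an Automator behavioral signal.
    """
    acknowledged: set = field(default_factory=set)
    bypass_attempts: int = 0

    def acknowledge(self, output_id: str) -> None:
        # Record that the user confirmed reviewing this specific output.
        self.acknowledged.add(output_id)

    def release(self, output_id: str) -> bool:
        # Block release of unreviewed outputs and count the attempt.
        if output_id in self.acknowledged:
            return True
        self.bypass_attempts += 1
        return False
```

A user who repeatedly calls `release` without `acknowledge` accumulates bypass attempts, giving the audit a per-user metric instead of relying on self-report.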
- Archetype 2: The Skeptic-Avoider. The Skeptic-Avoider does not believe the system produces reliable outputs and will avoid using it while appearing to comply with adoption mandates. They maintain a parallel manual process and submit AI-generated outputs without reviewing them — using the tool as a documentation artifact rather than a decision support resource. The Skeptic-Avoider's outputs look adopted but are not. The risk is that when the AI output is wrong and goes to a client or regulator, the Skeptic-Avoider can truthfully say they did not rely on it — and the liability falls on the organization that mandated its use. Identify these users through output acceptance rate monitoring: an acceptance rate consistently at or near 100% from a user who vocally doubts the system's quality is a red flag.
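Acceptance-rate monitoring reduces to counting accept/reject events per user and flagging anyone above a near-100% threshold. A minimal sketch, assuming a hypothetical `flag_rubber_stampers` helper; the 0.98 threshold and 50-output volume floor are illustrative defaults, not calibrated values:

```python
from collections import defaultdict


def flag_rubber_stampers(events, threshold=0.98, min_outputs=50):
    """Flag users whose AI-output acceptance rate is suspiciously high.

    `events` is an iterable of (user_id, accepted) pairs, where
    `accepted` is True if the user submitted the output unchanged.
    The volume floor avoids flagging users with too little data.
    """
    totals = defaultdict(int)
    accepts = defaultdict(int)
    for user, accepted in events:
        totals[user] += 1
        if accepted:
            accepts[user] += 1
    return sorted(
        user for user, n in totals.items()
        if n >= min_outputs and accepts[user] / n >= threshold
    )
```

Flagging is a starting point for a conversation, not a verdict: a 100% acceptance rate can also belong to a diligent reviewer working on genuinely easy inputs, so the signal should be read alongside the user's stated attitude toward the system.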
- Archetype 3: The Boundary Tester. The Boundary Tester is curious, technically sophisticated, and will deliberately probe the system's limits. In security terms, they are your internal red team — valuable if formalized, dangerous if unmanaged. An informal Boundary Tester who discovers a prompt injection vulnerability, a data boundary violation, or an output that reveals more than intended will either report it responsibly or will not. The outcome depends entirely on the reporting culture you have established. Before go-live, create a formal channel for Boundary Testers: a responsible disclosure process for AI system issues that rewards discovery rather than punishing it.
- Archetype 4: The Over-Truster. The Over-Truster accepts AI outputs without critical review, particularly in domains where the output sounds authoritative and aligns with their priors. This archetype is most dangerous in high-stakes decision contexts: medical, legal, financial. They do not bypass oversight because they are malicious — they bypass it because the output looks correct and they trust the system implicitly. The EU AI Act's human oversight requirements exist specifically because of this archetype. Design review workflows that make uncritical acceptance difficult: require explicit acknowledgment fields, surface confidence indicators, and flag when output complexity warrants senior review.
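The review-workflow design above can be sketched as a small routing function. This is a minimal illustration, not a prescribed implementation: the word-count complexity proxy, the 500-word cutoff, and the routing labels are all assumptions standing in for whatever complexity measure and review tiers an organization actually defines:

```python
def route_for_review(output_text: str, user_ack: bool, high_stakes: bool) -> str:
    """Decide the review path for an AI output before it is released.

    Hypothetical sketch: blocks release without an explicit
    acknowledgment, and escalates high-stakes or complex outputs
    (word count used here as a crude complexity proxy) to senior review.
    """
    if not user_ack:
        # Explicit acknowledgment field left unchecked: hard stop.
        return "blocked: explicit reviewer acknowledgment required"
    word_count = len(output_text.split())
    if high_stakes or word_count > 500:
        # Stakes or complexity exceed the threshold for solo sign-off.
        return "senior review"
    return "released"
```

The point of the hard stop on `user_ack` is that uncritical acceptance should require a deliberate false attestation rather than a passive click-through — which is the behavioral property the EU AI Act's human oversight provisions are aimed at.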