BI-301g · Module 3

Trigger Detection Quality Metrics


Trigger detection quality is measured across four dimensions.

Detection rate: the percentage of trigger events that were detected before the response window closed. Measured retrospectively by reviewing account activity and identifying events that should have been detected. Target: 90%+ for Tier 1 accounts.

False positive rate: the percentage of detected events that turned out to be non-events upon investigation. A high false positive rate wastes response resources on events that do not warrant engagement. Target: below 20%.

Response timeliness: the percentage of detected events that received a response within the playbook-defined window. Target: 85%+ for critical and high-priority events.

Response effectiveness: the percentage of trigger responses that produced a meaningful customer engagement (meeting, conversation, or information exchange). Target: 40%+ for critical events, 25%+ for high-priority events.
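The four rates above can be sketched as a small computation. This is a minimal illustration, not a prescribed implementation: the event record shape (`detected`, `true_event`, `responded_in_window`, `produced_engagement`) and the function names are assumptions for the example.

```python
def pct(numerator, denominator):
    """Percentage, guarding against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

def quality_metrics(events):
    """Compute the four quality dimensions from retrospective event records.

    events: list of dicts with boolean keys 'detected', 'true_event',
    'responded_in_window', 'produced_engagement' (hypothetical schema).
    """
    detected = [e for e in events if e["detected"]]
    return {
        # Real events detected before the window closed, out of all real events.
        "detection_rate": pct(sum(e["true_event"] for e in detected),
                              sum(e["true_event"] for e in events)),
        # Detected events that turned out to be non-events.
        "false_positive_rate": pct(sum(not e["true_event"] for e in detected),
                                   len(detected)),
        # Detected events that received a response within the playbook window.
        "response_timeliness": pct(sum(e["responded_in_window"] for e in detected),
                                   len(detected)),
        # Detected events whose response produced meaningful engagement.
        "response_effectiveness": pct(sum(e["produced_engagement"] for e in detected),
                                      len(detected)),
    }
```

Each result can then be compared against its target (for example, flagging a Tier 1 account when `detection_rate` falls below 90).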

  1. Measure detection rate quarterly. Review all significant events across monitored accounts over the past quarter. For each event, determine whether it was detected by the monitoring system and, if so, how quickly. Events that were not detected reveal gaps in the signal source architecture or filtering criteria. Each missed event triggers a root cause analysis and a system adjustment.
  2. Track false positive rate monthly. Count the number of events that were classified as triggers but, upon investigation, did not represent meaningful customer world changes. A high false positive rate means the filtering criteria are too loose: tighten them. A very low false positive rate may mean the criteria are too tight: loosening them slightly may capture events that are currently being missed.
  3. Measure response effectiveness quarterly. For every trigger response executed, track whether it produced customer engagement within 30 days. Responses that produce engagement validate the playbook. Responses that produce no engagement suggest poor timing, poor content, or a misclassified trigger. Analyze the responses that failed to produce engagement to improve the playbook for next time.
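The 30-day engagement check in step 3 can be sketched as a simple classifier. This is an illustrative sketch under assumed field names; the three outcome labels are not part of the source.

```python
from datetime import date, timedelta

# Engagement must follow the response within this window to count (step 3).
ENGAGEMENT_WINDOW = timedelta(days=30)

def classify_response(response_date, engagement_date):
    """Label one trigger response for the quarterly review.

    engagement_date is None when no engagement was ever logged; such
    responses are the ones to analyze for timing, content, or a
    misclassified trigger.
    """
    if engagement_date is None:
        return "no_engagement"
    if engagement_date - response_date <= ENGAGEMENT_WINDOW:
        return "effective"
    # Engagement happened, but too late to credit this response.
    return "late_engagement"
```

Tallying `effective` labels over all responses executed in the quarter yields the response effectiveness rate to compare against the 40%+/25%+ targets.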