AS-301g · Module 2
Tuning and False Positive Management
3 min read
A detection rule that produces ten false positives per day trains the analyst to ignore it. A detection rule that produces zero false positives and zero true positives is not detecting anything. Tuning is the iterative process of adjusting thresholds, refining patterns, and adding context until the rule produces actionable alerts — true positives that drive investigation without drowning the analyst in noise.
Do This
- Track the false positive rate for every detection rule and set a target — less than 10% is a reasonable goal for mature rules
- Tune aggressively in the first 30 days after deploying a new rule — the initial thresholds are estimates, not calibrations
- Add contextual exceptions — if a specific agent type legitimately produces long outputs, exclude it from the output length anomaly rule
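The practices above can be sketched in a few lines. This is a minimal illustration, not any specific SIEM's API: the rule names, the `report-writer` agent type, the alert fields, and the helper names are all hypothetical, and the 10% target comes from the guidance above.

```python
# Minimal sketch of per-rule false positive tracking with contextual
# exceptions. All rule and agent-type names are illustrative.
from collections import Counter

FP_TARGET = 0.10  # "less than 10%" goal for mature rules

# Agent types exempted from specific rules after triage review,
# e.g. an agent that legitimately produces long outputs.
EXCEPTIONS = {
    "output_length_anomaly": {"report-writer"},
}

def false_positive_rate(verdicts):
    """verdicts: iterable of (rule, disposition), where disposition is
    'true_positive' or 'false_positive' as judged at analyst triage."""
    totals, fps = Counter(), Counter()
    for rule, disposition in verdicts:
        totals[rule] += 1
        if disposition == "false_positive":
            fps[rule] += 1
    return {rule: fps[rule] / totals[rule] for rule in totals}

def should_alert(rule, agent_type):
    """Suppress alerts for agent types excepted from this rule."""
    return agent_type not in EXCEPTIONS.get(rule, set())

# Example triage log and a tuning report against the target.
verdicts = [
    ("output_length_anomaly", "false_positive"),
    ("output_length_anomaly", "true_positive"),
    ("credential_access", "true_positive"),
]
for rule, rate in false_positive_rate(verdicts).items():
    if rate > FP_TARGET:
        print(f"{rule}: FP rate {rate:.0%} exceeds target; tune or add exceptions")
```

The key design point is that exceptions are scoped per rule rather than global, so excluding one agent type from the output-length rule does not blind every other detection to it.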
Avoid This
- Disable noisy rules instead of tuning them — the rule might be detecting real threats underneath the false positives
- Accept high false positive rates as normal — alert fatigue is the leading cause of missed detections in SOC operations
- Set thresholds once and never revisit them — as system behavior evolves, thresholds must evolve with it
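One way to avoid the set-and-forget trap is to recompute thresholds from a recent baseline window instead of hard-coding them. The sketch below is an assumption-laden illustration: the 99th percentile cutoff and the idea of a rolling baseline are illustrative choices, not a prescribed calibration method.

```python
# Sketch of periodic threshold recalibration: derive the anomaly cutoff
# from a recent window of observed values so the rule tracks evolving
# system behavior. The 0.99 quantile is an illustrative default.
def recalibrated_threshold(recent_values, quantile=0.99):
    """Return the value at the given quantile of the recent baseline."""
    ordered = sorted(recent_values)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Example: if the last baseline window of output lengths has drifted
# upward, the recomputed cutoff rises with it, instead of the old fixed
# threshold flagging newly-normal behavior as anomalous.
baseline = [120, 140, 135, 150, 900, 145, 160, 155, 130, 148]
threshold = recalibrated_threshold(baseline)
```

Scheduling this recalibration (for example, alongside the 30-day tuning pass suggested above) keeps the threshold an ongoing measurement rather than a one-time estimate.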