INSTITUTE FOR DEFENSE ANALYSES, ALEXANDRIA, VA, United States
The Department of Defense and the Department of Homeland Security use many threat detection systems, such as air cargo screeners and counter-improvised-explosive-device systems. Threat detection systems that perform well during testing, however, are not always well received by system operators. Some systems may frequently cry wolf, generating false alarms when no true threat is present. As a result, operators lose faith in the systems, ignoring them or even turning them off and taking the chance that a true threat will not appear. This article reviews statistical concepts to reconcile the performance metrics that summarize a developer's view of a system during testing with the metrics that describe an operator's view of the system during real-world missions. Program managers can still make use of systems that cry wolf by arranging them into a tiered system that, overall, exhibits better performance than each individual system alone.
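The gap between the developer's and the operator's view can be sketched with Bayes' theorem: a detector's sensitivity and specificity are measured during testing, but the operator experiences the positive predictive value (PPV), which collapses when true threats are rare. The numbers below (95% sensitivity and specificity, 1-in-1,000 threat prevalence) are illustrative assumptions, not figures from the article:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem:
    P(threat | alarm) = P(alarm | threat) P(threat) / P(alarm)."""
    true_alarms = sensitivity * prevalence
    false_alarms = (1 - specificity) * (1 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Developer's view in testing: 95% sensitivity, 95% specificity.
# Operator's view on a mission where only 1 item in 1,000 is a threat:
tier1 = ppv(0.95, 0.95, 0.001)
print(round(tier1, 3))  # 0.019 -- fewer than 2 alarms in 100 are real threats

# Tiered screening: a second, independent detector re-checks only the
# items that tier 1 flagged. The threat prevalence among those items
# equals tier 1's PPV, so tier 2 operates at a far higher base rate:
print(round(ppv(0.95, 0.95, tier1), 3))  # 0.265 -- alarms are now much more credible
```

This is one way to make the article's point concrete: neither detector improves on its own, but chaining them raises the base rate each successive tier sees, so the combined system cries wolf far less often.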
Defense ARJ (Defense Acquisition Research Journal), 24(1).