Accession Number:



Artificial Intelligence for Decision Emulation (Medic-AIDE): FY19 Biomedical Sciences and Technologies Line-Supported Program

Descriptive Note:

[Technical Report, Formal Report]

Corporate Author:

MIT Lincoln Laboratory

Personal Author(s):

Report Date:


Pagination or Media Count:



Precise estimation of uncertainty in predictions made by AI systems is a critical factor in ensuring trust and safety. Replicating and enhancing experts' decisions while quantifying uncertainty in predictions is a challenging problem. Uncertainty-aware AI is a requirement for safety-critical domains such as healthcare, autonomous navigation, and cybersecurity. In particular, when AI is used to emulate the decisions of medical experts in the field, AI confidence must be measured, as it plays a key role in making effective triage decisions and choosing appropriate treatment options. While various aspects of deep learning, such as achieving high accuracy and optimizing architectures, are maturing, precise predictive uncertainty estimation remains a subject of ongoing research. Conventional neural networks tend to be overconfident because they do not account for uncertainty during training. In contrast to Bayesian neural networks, which learn approximate distributions over weights to infer prediction confidence, we propose a novel method, Information Robust Dirichlet networks, that provides accurate uncertainty estimates while maintaining high prediction accuracy. Properties of the new cost function are derived to show how improved uncertainty estimation is achieved. Experiments using real medical datasets on heart arrhythmia diagnosis and AI-assisted pre-hospital triage show that our technique outperforms state-of-the-art neural networks by a large margin in estimating predictive uncertainty.
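To make the Dirichlet-network idea concrete, the sketch below shows one common evidential formulation (an assumption for illustration, not the report's exact cost function or architecture): the network emits non-negative per-class "evidence", which parameterizes a Dirichlet distribution whose mean gives class probabilities and whose total concentration gives a closed-form scalar uncertainty.

```python
import numpy as np

def dirichlet_prediction(evidence):
    """Illustrative sketch of a Dirichlet-network output head.
    `evidence` is the non-negative per-class output of a network;
    the Dirichlet concentrations are alpha = evidence + 1."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size                 # number of classes
    alpha = evidence + 1.0            # Dirichlet concentration parameters
    strength = alpha.sum()            # total evidence ("Dirichlet strength")
    probs = alpha / strength          # expected class probabilities
    uncertainty = k / strength        # in (0, 1]; equals 1 with zero evidence
    return probs, uncertainty

# Strong evidence for class 0 -> confident, low uncertainty
p_hi, u_hi = dirichlet_prediction([40.0, 1.0, 1.0])

# No evidence at all -> uniform probabilities, maximal uncertainty
p_lo, u_lo = dirichlet_prediction([0.0, 0.0, 0.0])
```

Unlike a softmax output, which can assign near-certain probabilities even on inputs far from the training data, this parameterization lets low total evidence translate directly into high reported uncertainty, which is the behavior the abstract highlights for triage and treatment decisions.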

Subject Categories:

  • Cybernetics
  • Medical Facilities, Equipment and Supplies

Distribution Statement:

[A, Approved For Public Release]