Departmental Papers (CIS)

Date of this Version

7-23-2021

Document Type

Conference Paper

Comments

Workshop on Uncertainty and Robustness in Deep Learning (ICML 2021), held virtually, July 23, 2021

Abstract

Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs). This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. This challenge has received significant attention recently, and several techniques have been developed to detect inputs where the model’s prediction cannot be trusted. These techniques detect OODs as datapoints with either high epistemic uncertainty or high aleatoric uncertainty. We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric). We perform experiments on vision datasets with multiple DNN architectures, achieving state-of-the-art results in most cases.
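
The following is a minimal sketch, not the authors' exact method, of the idea described in the abstract: flag an input as OOD when either its epistemic or its aleatoric uncertainty is high. It assumes softmax outputs from several stochastic forward passes (e.g., MC-dropout or a deep ensemble) are available, uses mutual information (BALD) as the epistemic score and expected predictive entropy as the aleatoric score, and the thresholds shown are hypothetical.

```python
import numpy as np


def aleatoric_score(probs: np.ndarray) -> float:
    """Expected entropy of the per-pass predictive distributions; probs has shape (T, num_classes)."""
    eps = 1e-12
    per_pass_entropy = -np.sum(probs * np.log(probs + eps), axis=1)  # (T,)
    return float(per_pass_entropy.mean())


def epistemic_score(probs: np.ndarray) -> float:
    """Mutual information (BALD): entropy of the mean prediction minus the expected entropy."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)  # (num_classes,)
    total_entropy = -float(np.sum(mean_p * np.log(mean_p + eps)))
    return total_entropy - aleatoric_score(probs)


def is_ood(probs: np.ndarray, tau_aleatoric: float = 1.5, tau_epistemic: float = 0.2) -> bool:
    """Flag the input as OOD if either uncertainty exceeds its (hypothetical) threshold."""
    return aleatoric_score(probs) > tau_aleatoric or epistemic_score(probs) > tau_epistemic


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated softmax outputs from T=10 stochastic passes over 10 classes.
    logits = rng.normal(size=(10, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(is_ood(probs))
```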

Subject Area

CPS Safe Autonomy

Publication Source

Workshop on Uncertainty and Robustness in Deep Learning (ICML 2021)

Date Posted: 20 October 2021

This document has been peer reviewed.