Uncertainty Estimation Toward Safe AI
Discipline
Computer Sciences
Subject
calibration
prediction set
probably approximately correct
safety
uncertainty quantification
Abstract
Safety-critical AI systems interact with their environments based on inductively learned predictors, which may not always be correct. To guard against incorrect predictions, quantifying the uncertainty of predictions is crucial for guaranteeing the safety of AI systems. The major challenge in uncertainty quantification is providing theoretical guarantees on the correctness of uncertainty estimates across diverse environments. In this thesis, we propose novel approaches to quantifying uncertainty, with correctness guarantees where possible, under two major environments: (1) the i.i.d. environment, where the learning-time and inference-time distributions are identical, and (2) the covariate shift environment, where the learning-time and inference-time covariate distributions may differ. In particular, we propose an algorithm to construct a set predictor over labels, i.e., a prediction set, that satisfies a probably approximately correct (PAC) guarantee under both the i.i.d. and covariate shift environments, while keeping the prediction set small enough to be useful as an uncertainty quantifier. Here, the guarantee under covariate shift assumes smoothness of the covariate distributions. An alternative way to represent uncertainty is via the confidence of predictors; we also propose an algorithm for estimating classifier confidence that comes with a PAC guarantee under the i.i.d. environment, together with a rigorous confidence estimator under the covariate shift environment. We demonstrate that the proposed approaches are practically useful across a variety of tasks, including image classification, visual object detection, visual object tracking, false medical alarm suppression, state estimation in reinforcement learning, fast deep neural network inference, and safe reinforcement learning.
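To make the PAC prediction set idea concrete, below is a minimal sketch for the i.i.d. setting, assuming a classifier that outputs per-label scores (e.g., softmax probabilities): the set predictor keeps every label whose score clears a threshold, and the threshold is the largest one certified by a binomial tail bound on a held-out calibration set. The function names and toy data are illustrative; this is a simplified sketch of the thresholding idea, not the thesis's exact algorithm.

```python
import numpy as np
from scipy.stats import binom


def pac_threshold(cal_scores, cal_labels, eps=0.1, delta=1e-3):
    """Largest threshold tau such that C(x) = {y : score(x, y) >= tau} is
    (eps, delta)-PAC on i.i.d. data: with probability >= 1 - delta over the
    calibration set, the miscoverage P(y not in C(x)) is at most eps."""
    n = len(cal_labels)
    true_scores = cal_scores[np.arange(n), cal_labels]  # score of the true label
    # Scan candidate thresholds from largest to smallest; raising tau
    # shrinks the sets, so the first certified tau is the largest one.
    for tau in np.sort(true_scores)[::-1]:
        k = int(np.sum(true_scores < tau))  # calibration errors at this tau
        # Binomial tail test: k errors out of n certify an error rate <= eps
        # with confidence 1 - delta.
        if binom.cdf(k, n, eps) <= delta:
            return tau
    return -np.inf  # nothing certifiable: fall back to predicting all labels


def prediction_set(scores, tau):
    """Labels whose score clears the certified threshold."""
    return np.flatnonzero(scores >= tau)


# Toy usage: 5 labels, softmax-like scores on a synthetic calibration split.
rng = np.random.default_rng(0)
cal_scores = rng.dirichlet(np.ones(5), size=2000)
cal_labels = cal_scores.argmax(axis=1)  # stand-in for ground-truth labels
tau = pac_threshold(cal_scores, cal_labels, eps=0.1, delta=1e-3)
print(tau, prediction_set(cal_scores[0], tau))
```

Because the number of calibration errors is monotone in the threshold, a single descending scan suffices: the first threshold that passes the binomial test is the largest PAC-certifiable one, which keeps the prediction sets as small as the guarantee allows.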
Advisor
Osbert Bastani