Uncertainty Estimation Toward Safe AI

Degree type
Doctor of Philosophy (PhD)
Graduate group
Computer and Information Science
Discipline
Computer Sciences
Subject
artificial intelligence
calibration
prediction set
probably approximately correct
safety
uncertainty quantification
Copyright date
2022-09-09
Author
Park, Sangdon
Abstract

Safety-critical AI systems interact with their environments based on inductively learned predictors, which may not always be correct. To compensate for incorrect predictions, quantifying the uncertainty of predictions is crucial to guaranteeing the safety of these AI systems. The central challenge of uncertainty quantification is providing theoretical guarantees on the correctness of uncertainty estimates across environments. In this thesis, we propose novel approaches for quantifying uncertainty, with correctness guarantees where possible, under two major environments: (1) the i.i.d. environment, where the learning-time and inference-time distributions are identical, and (2) the covariate shift environment, where the learning-time and inference-time covariate distributions may differ. In particular, we propose an algorithm that constructs a set predictor over labels, i.e., a prediction set, satisfying a probably approximately correct (PAC) guarantee under both the i.i.d. and covariate shift environments, while keeping the prediction set small enough to be useful as an uncertainty quantifier. The guarantee under covariate shift assumes smoothness of the covariate distributions. An alternative way to represent uncertainty is via the confidence of a predictor; we also propose an algorithm that estimates the confidence of classifiers with a PAC guarantee under the i.i.d. environment, and rigorously estimates confidence under the covariate shift environment. We demonstrate that the proposed approaches are practically useful across a variety of tasks, including image classification, visual object detection, visual object tracking, false medical alarm suppression, state estimation in reinforcement learning, fast deep neural network inference, and safe reinforcement learning.
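As a rough illustration of the PAC prediction-set idea in the i.i.d. setting described above (a minimal sketch, not the thesis's exact algorithm): given a held-out calibration set and any per-label score function, one can pick the largest score threshold whose empirical miss count still passes a one-sided binomial tail test, which yields a prediction set that misses the true label with probability at most epsilon, with confidence at least 1 - delta. All names below are illustrative.

    import numpy as np
    from scipy.stats import binom

    def pac_threshold(true_label_scores, epsilon, delta):
        # true_label_scores: the model's score (e.g., softmax probability)
        # for the TRUE label of each held-out calibration example.
        # Returns the largest tau such that the prediction set
        # C(x) = {y : score(x, y) >= tau} misses the true label with
        # probability at most epsilon, with confidence 1 - delta.
        n = len(true_label_scores)
        # Largest number of calibration misses k still consistent with a
        # true miss rate <= epsilon: if the rate exceeded epsilon, then
        # observing <= k misses would have probability <= delta
        # (one-sided binomial tail bound).
        k = -1
        while k + 1 <= n and binom.cdf(k + 1, n, epsilon) <= delta:
            k += 1
        if k < 0:
            # Even zero misses is inconclusive; fall back to all labels.
            return -np.inf
        # A miss occurs when the true label's score falls below tau, so
        # taking tau to be the (k+1)-th smallest true-label score allows
        # at most k misses on the calibration set.
        return np.sort(true_label_scores)[k]

    # Usage (illustrative): tau = pac_threshold(scores, 0.05, 1e-3);
    # at test time, predict the set {y : softmax(x)[y] >= tau}.

Smaller tau gives larger (more conservative) sets; the binomial test lets the threshold grow only as far as the calibration evidence supports, which is what keeps the sets small enough to be informative.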

Advisor
Insup Lee
Osbert Bastani
Date of degree
2021-01-01