Knowing What We Don't Know! Detecting and Learning Out-of-Distribution Data in the Open World

Degree type
Doctor of Philosophy (PhD)
Graduate group
Computer and Information Science
Discipline
Data Science
Electrical Engineering
Computer Sciences
Subject
Artificial Intelligence
Continual Learning
Machine Learning
Open World Machine Learning
Out-of-Distribution Detection
Robotics
Funder
Grant number
License
Copyright date
2024
Distributor
Related resources
Author
Gummadi, Meghna Chowdary
Contributor
Abstract

Current Artificial Intelligence (AI) systems have seen remarkable success across a variety of applications. However, this success has been limited to settings that satisfy the closed-world assumption: these systems presume deployment in stationary environments where they encounter only In-Distribution (ID) instances drawn from the same distribution as their training data. In reality, AI systems are no longer confined to stationary environments; they have ventured into the dynamic, open world, where the number of concepts is unbounded and encountering novel, Out-of-Distribution (OOD) instances that lie beyond the training distribution is the norm. The success of closed-world AI systems has yet to translate to the open world, where their lack of expertise leads to catastrophic failure. To operate reliably in the open world, AI systems must recognize and adapt to its dynamic, uncertain nature. Detecting OOD instances and adapting by incrementally learning them would enable AI systems to remain reliable and evolve continually in the open world, much like biological systems. Drawing inspiration from our innate ability to identify and adapt to novel situations by reasoning about and monitoring our own cognition, this dissertation presents open-world learning frameworks that reason over their uncertainty to monitor their performance, identify novelties, and adapt to them. The dissertation investigates the two core problems of open-world learning: (1) OOD detection and (2) continual learning to detect and incrementally learn OOD instances. For OOD detection, it investigates and presents feature representations that prevent feature collapse between ID and OOD data by imparting a specific structure to ID feature sets, which improves the reliability of low-confidence scores generated for OOD instances. This work also shows the benefit of leveraging the context under which the network generates those scores to further improve OOD detection.
Empirical evaluations across multiple benchmarks in the classification and segmentation domains show significant improvements over comparable OOD detection baselines and demonstrate strong generalization. This dissertation also unifies OOD detection and continual learning within a single feature representation, introducing a novel open-world learning framework. Evaluated in the classification domain, the framework enables OOD detection and accommodation within a natural continual learning loop. Together, the contributions to improved OOD detection and a unified framework for open-world learning constitute significant progress toward the successful and reliable use of AI systems in the open world.
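The low-confidence-score idea described in the abstract can be illustrated with the classic maximum-softmax-probability (MSP) baseline for OOD detection. This is a generic sketch of that baseline, not the dissertation's method; the threshold value and example logits are illustrative assumptions only.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: high for confident (likely ID)
    inputs, low for uncertain (possibly OOD) inputs."""
    return softmax(logits).max(axis=-1)

def detect_ood(logits, threshold=0.5):
    """Flag inputs whose confidence falls below the threshold as OOD.
    The threshold here is an illustrative choice, not a tuned value."""
    return msp_score(logits) < threshold

# A peaked (confident, ID-like) logit vector vs. a flat (uncertain,
# OOD-like) one, standing in for a classifier's outputs.
id_logits = np.array([[8.0, 0.5, 0.2]])   # peaked -> high confidence
ood_logits = np.array([[1.0, 1.1, 0.9]])  # flat   -> low confidence
```

A classifier whose ID features collapse onto the same regions as OOD features can assign high MSP scores to OOD inputs; structuring the ID feature set, as the dissertation proposes, is one way to keep such confidence scores meaningful.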

Advisor
Eaton, Eric
Date of degree
2024
Date Range for Data Collection (Start Date)
Date Range for Data Collection (End Date)
Digital Object Identifier
Series name and number
Volume number
Issue number
Publisher
Publisher DOI
Journal Issue
Comments
Recommended citation