ACTIVE LEARNING OF VISION-BASED REPRESENTATIONS FOR ROBOTICS

Degree type
Doctor of Philosophy (PhD)
Graduate group
Computer and Information Science
Discipline
Electrical Engineering
Data Science
Computer Sciences
Subject
Active Learning
Active Perception
Exploration-Exploitation Tradeoffs
Navigation
Representation Learning
Funder
Grant number
License
Copyright date
2023
Distributor
Related resources
Author
Bucher, Bernadette Kathleen
Contributor
Abstract

Intelligent autonomous robots should be capable of operating in environments designed for humans in order to complete routine tasks such as common household chores. Robots should also be able to mimic human behavior without requiring detailed prior knowledge of every environment in which they operate. These tasks demand multiple levels of decision making and reasoning: the robot must jointly reason about the task and the environment and then successfully execute the derived plan. To build robots that can perform complex hierarchical decision making for broad sets of skills in diverse and novel settings, this work presents a perspective for actively learning visual representations. We demonstrate the usefulness of our strategy with a novel framework for navigating in unseen environments by explicitly predicting semantic and occupancy maps and leveraging the uncertainty over our map predictions to make navigation decisions. We demonstrate that actively learning a task-independent representation useful for a group of related navigation tasks enables solving these tasks with minimal task-specific training. We present methods for efficiently navigating to semantic targets (object goal navigation and multi-object navigation) and around obstacles (point goal navigation) in previously unseen indoor environments. We also present a novel adversarial active learning method for vision-based dynamics models, with which we provide insights into performance and efficiency trade-offs for active learning strategies. Navigation results are demonstrated in the visually realistic environments of the Matterport3D dataset in the Habitat simulator. Adversarial active learning results are demonstrated in OpenAI Gym environments and in a domain transfer setting on a Baxter robot arm platform.
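
To illustrate the idea of leveraging uncertainty over predicted maps to make navigation decisions, the following minimal Python sketch scores candidate goal cells by the entropy of a predicted occupancy map minus a travel cost. It is an illustrative example only, not the dissertation's implementation; the map predictor, frontier cells, and grid metadata are hypothetical placeholders.

    # Minimal illustrative sketch (not the dissertation's implementation) of using
    # uncertainty over a predicted occupancy map to choose a navigation goal.
    # The predicted map, frontier cells, and weights are hypothetical placeholders.

    import numpy as np

    def entropy(p, eps=1e-8):
        """Per-cell binary entropy of predicted occupancy probabilities."""
        p = np.clip(p, eps, 1.0 - eps)
        return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

    def score_candidate_goals(occupancy_prob, candidate_cells, robot_cell,
                              info_weight=1.0, dist_weight=0.1, window=5):
        """Score candidate goal cells by local map uncertainty minus travel cost.

        occupancy_prob: (H, W) array of predicted occupancy probabilities.
        candidate_cells: list of (row, col) frontier cells to evaluate.
        robot_cell: (row, col) current robot position on the grid.
        """
        unc = entropy(occupancy_prob)
        scores = []
        for (r, c) in candidate_cells:
            # Expected information gain: summed uncertainty in a window around the goal.
            r0, r1 = max(0, r - window), min(unc.shape[0], r + window + 1)
            c0, c1 = max(0, c - window), min(unc.shape[1], c + window + 1)
            info = unc[r0:r1, c0:c1].sum()
            # Simple travel cost: Euclidean distance in grid cells (stand-in for a planner cost).
            dist = np.hypot(r - robot_cell[0], c - robot_cell[1])
            scores.append(info_weight * info - dist_weight * dist)
        return np.asarray(scores)

    if __name__ == "__main__":
        # Example usage with a random "predicted" map; in practice the probabilities
        # would come from a learned semantic/occupancy map predictor.
        rng = np.random.default_rng(0)
        occ = rng.uniform(0.0, 1.0, size=(64, 64))   # placeholder predicted occupancy map
        frontiers = [(5, 40), (30, 10), (55, 55)]    # placeholder frontier cells
        robot = (32, 32)
        scores = score_candidate_goals(occ, frontiers, robot)
        goal = frontiers[int(np.argmax(scores))]     # navigate toward the highest-scoring cell
        print("selected goal cell:", goal)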

Advisor
Daniilidis, Kostas
Matni, Nikolai
Date of degree
2023
Date Range for Data Collection (Start Date)
Date Range for Data Collection (End Date)
Digital Object Identifier
Series name and number
Volume number
Issue number
Publisher
Publisher DOI
Journal Issue
Comments
Recommended citation