Learning Perceptual Prediction: Learning from Humans and Reasoning about Objects

Degree type
Doctor of Philosophy (PhD)
Graduate group
Computer and Information Science
Discipline
Computer Sciences
Electrical Engineering
Subject
Learning from demonstration
Machine Learning
Reinforcement Learning
Robotics
Video Prediction
Copyright date
2023
Author
Schmeckpeper, Karl
Abstract

Reasoning about the results of their actions is a critical skill for embodied agents. In this thesis, we study how robots can learn to predict the future from visual observations, addressing two main questions: first, how can agents acquire sufficient data to train large prediction models, and second, what structure should we embed into these models? To acquire sufficient data, we demonstrate methods for learning prediction models and agent policies from a combination of human and robot data. By utilizing human data, we significantly increase the quantity of data available, improving both prediction performance and task completion. To address the question of model structure, we demonstrate an object-centric video prediction model that learns to segment objects without any labels, and we show that the object-centric architecture improves performance.
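As a rough illustration of the kind of object-centric video prediction model the abstract describes, the sketch below groups image features into per-object latent slots, rolls each slot forward with a learned dynamics module, and composites the slots back into a frame through per-slot masks, which is also where label-free segmentation can emerge. This is a minimal PyTorch sketch under assumed design choices (slot-attention-style grouping, GRU dynamics, a low-resolution decoder, and all layer sizes and names); it is not the architecture used in the thesis.

import torch
import torch.nn as nn

class ObjectCentricPredictor(nn.Module):
    """Toy object-centric next-frame predictor (illustrative assumptions only)."""

    def __init__(self, num_slots=5, slot_dim=64, frame_channels=3):
        super().__init__()
        self.frame_channels = frame_channels
        # Per-frame encoder: image -> grid of feature vectors.
        self.encoder = nn.Sequential(
            nn.Conv2d(frame_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, slot_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Learnable slot queries attend over the feature grid, grouping pixels
        # into object-like slots without any segmentation labels.
        self.slot_queries = nn.Parameter(torch.randn(num_slots, slot_dim))
        self.group = nn.MultiheadAttention(slot_dim, num_heads=1, batch_first=True)
        # Per-slot dynamics: each object's latent state is advanced independently.
        self.dynamics = nn.GRUCell(slot_dim, slot_dim)
        # Each slot decodes to an RGB patch plus an alpha logit; a softmax over
        # slots yields per-pixel assignments (the emergent segmentation masks).
        self.decoder = nn.Sequential(
            nn.Linear(slot_dim, 256), nn.ReLU(),
            nn.Linear(256, (frame_channels + 1) * 16 * 16),
        )

    def encode(self, frame):
        feats = self.encoder(frame)                        # (B, D, H, W)
        b, d, h, w = feats.shape
        feats = feats.flatten(2).transpose(1, 2)           # (B, H*W, D)
        queries = self.slot_queries.unsqueeze(0).expand(b, -1, -1)
        slots, _ = self.group(queries, feats, feats)       # (B, K, D)
        return slots

    def predict_next(self, slots):
        b, k, d = slots.shape
        flat = slots.reshape(b * k, d)
        return self.dynamics(flat, flat).reshape(b, k, d)  # one dynamics step per slot

    def decode(self, slots):
        b, k, _ = slots.shape
        out = self.decoder(slots).view(b, k, self.frame_channels + 1, 16, 16)
        rgb, alpha = out[:, :, :-1], out[:, :, -1:]
        masks = torch.softmax(alpha, dim=1)                # slots compete per pixel
        frame = (masks * rgb).sum(dim=1)                   # low-res composite, for brevity
        return frame, masks

    def forward(self, frame):
        slots = self.encode(frame)                         # current frame -> object slots
        next_slots = self.predict_next(slots)              # roll object states forward
        return self.decode(next_slots)                     # compose predicted next frame

# Example usage with hypothetical shapes: predict the next frame for a batch of 64x64 frames.
model = ObjectCentricPredictor()
frames = torch.randn(2, 3, 64, 64)
pred_next_frame, slot_masks = model(frames)                # (2, 3, 16, 16), (2, 5, 1, 16, 16)

In a setup like this one, training would minimize a reconstruction loss between the composited prediction and the (downsampled) next frame; because each pixel must be explained by some slot, the masks can come to track object-like regions without any segmentation supervision.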

Advisor
Daniilidis, Kostas
Date of degree
2023