Graph Neural Networks

Degree type
Doctor of Philosophy (PhD)
Subject
Control
Fundamental Properties
Graph Signal Processing
Neural Networks
Representation
Robotics
Discipline
Artificial Intelligence and Robotics
Computer Sciences
Electrical and Electronics
Copyright date
2021-08-31
Author
Gama, Fernando
Abstract

The theme of this dissertation is machine learning on graph data. Graphs are generic models of signal structure that play a crucial role in tackling problems in a diverse array of fields, including smart grids, sensor networks, and robot swarms. Thus, developing machine learning models that can successfully learn from graph data is a promising area of research with high potential impact. This dissertation focuses particularly on graph neural networks (GNNs) as the main machine learning model for successfully addressing problems involving graph data. GNNs are nonlinear representation maps that exploit the underlying graph structure to improve learning and achieve better performance. One of their key properties is that they are local and distributed mathematical models, making them particularly relevant for problems involving physical networks. The overarching objective of this dissertation is to characterize the representation space of GNNs. This entails several research directions. First, we define a mathematical framework that provides the general tools and lays the groundwork for the analysis and design of concrete GNN models. Second, we derive fundamental properties and theoretical insights that serve as a foundation for understanding the success observed when employing GNNs in practical problems involving graph data. Third, we explore new application domains that are naturally suited to GNNs based on the properties they exhibit. We leverage graph signal processing (GSP) and its key concepts of graph filtering and the graph frequency domain to provide a general mathematical framework for characterizing GNNs. We derive the properties of permutation equivariance and stability to perturbations of the graph support, and use these to explain the improved performance of GNNs over linear graph filters. We also show how these two properties help explain the scalability and transferability of GNNs. Finally, we explore the use of GNNs in learning decentralized controllers and showcase their success in the problem of flocking.
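
As context for the framework sketched in the abstract, the following is a minimal illustration of how a GNN layer is built from graph filters defined by a graph shift operator S. It is not code from the dissertation; the NumPy implementation, function names, and tensor shapes are assumptions made only for this sketch.

    import numpy as np

    def graph_filter(S, x, h):
        # Polynomial graph filter: y = sum_k h[k] * S^k @ x,
        # where S is an (N, N) graph shift operator (e.g. adjacency matrix)
        # and x is an (N,) graph signal with one value per node.
        y = np.zeros_like(x, dtype=float)
        Skx = x.astype(float)          # S^0 @ x
        for hk in h:
            y += hk * Skx
            Skx = S @ Skx              # next power: S^(k+1) @ x
        return y

    def gnn_layer(S, X, H, sigma=np.tanh):
        # One graph convolutional layer: sigma( sum_k S^k @ X @ H[k] ),
        # with node features X of shape (N, F_in) and filter taps H of
        # shape (K + 1, F_in, F_out). The pointwise nonlinearity sigma is
        # what makes the map nonlinear; dropping it leaves a bank of
        # linear graph filters.
        Z = np.zeros((X.shape[0], H.shape[2]))
        SkX = X.astype(float)
        for Hk in H:
            Z += SkX @ Hk
            SkX = S @ SkX
        return sigma(Z)

    # Illustrative use on a random 5-node graph (values are placeholders).
    rng = np.random.default_rng(0)
    S = rng.random((5, 5))
    S = (S + S.T) / 2                  # symmetric shift operator
    x = rng.standard_normal(5)
    y = graph_filter(S, x, h=[1.0, 0.5, 0.25])
    X = rng.standard_normal((5, 3))
    H = rng.standard_normal((3, 3, 4)) # K = 2 taps, 3 input / 4 output features
    Z = gnn_layer(S, X, H)

Because information propagates only through repeated applications of S, relabeling the nodes (replacing S by P S P^T and permuting the input signal accordingly) permutes the output in exactly the same way; this is the permutation equivariance that the abstract uses, together with stability to perturbations of S, to explain the advantages of GNNs over linear graph filters.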

Advisor
Alejandro Ribeiro
Date of degree
2020-01-01