Date of Award

2020

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

First Advisor

Alejandro Ribeiro

Abstract

The theme of this dissertation is machine learning on graph data. Graphs are generic models of signal structure that play a crucial role in tackling problems in a diverse array of fields, including smart grids, sensor networks, and robot swarms. Thus, developing machine learning models that can successfully learn from graph data is a promising area of research with high potential impact.

This dissertation focuses particularly on the topic of graph neural networks (GNNs) as the main machine learning model for successfully addressing problems involving graph data. GNNs are nonlinear representation maps that exploit the underlying graph structure to improve learning and achieve better performance. One of the key properties of GNNs is that they are local and distributed mathematical models, making them particularly relevant for problems involving physical networks.

The overarching objective of this dissertation is to characterize the representation space of GNNs. This entails several research directions. First, we define a mathematical framework that provides the general tools and lays the groundwork for the analysis and design of concrete GNN models. Second, we derive fundamental properties and theoretical insights that serve as a foundation for understanding the success observed when employing GNNs in practical problems involving graph data. Third, we explore new application domains that are naturally suited to GNNs given the properties they exhibit.

We leverage graph signal processing (GSP) and its key concepts of graph filtering and the graph frequency domain to provide a general mathematical framework for characterizing GNNs. We derive the properties of permutation equivariance and stability to perturbations of the graph support, and use these to explain the improved performance of GNNs over linear graph filters. We also show how these two properties help explain the scalability and transferability of GNNs. We explore the use of GNNs in learning decentralized controllers and showcase their success in the problem of flocking.
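The building blocks named above (polynomial graph filters, a pointwise nonlinearity composing a GNN layer, and permutation equivariance) can be sketched minimally as follows. This is an illustrative NumPy sketch, not code from the dissertation; the filter taps, graph size, and function names are arbitrary choices for demonstration. The final check verifies numerically that relabeling the nodes of the graph simply relabels the layer's output, i.e., permutation equivariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_filter(S, x, h):
    """Polynomial graph filter: y = sum_k h[k] * S^k @ x,
    where S is the graph shift operator (here, the adjacency matrix)."""
    y = np.zeros_like(x)
    Sk = np.eye(S.shape[0])  # S^0
    for hk in h:
        y += hk * (Sk @ x)
        Sk = Sk @ S
    return y

def gnn_layer(S, x, h):
    """A single GNN layer: a pointwise nonlinearity applied
    to the output of a graph filter."""
    return np.tanh(graph_filter(S, x, h))

# Random undirected graph on n nodes (symmetric adjacency matrix).
n = 6
A = rng.random((n, n)) < 0.5
S = np.triu(A, 1).astype(float)
S = S + S.T

x = rng.standard_normal(n)   # graph signal: one value per node
h = [0.5, 0.3, 0.2]          # filter taps (illustrative values)

# Permutation equivariance: permuting node labels permutes the output.
P = np.eye(n)[rng.permutation(n)]
out_permuted_graph = gnn_layer(P @ S @ P.T, P @ x, h)
assert np.allclose(out_permuted_graph, P @ gnn_layer(S, x, h))
```

The equivariance follows because powers of the permuted shift operator satisfy (P S P^T)^k = P S^k P^T, and the pointwise nonlinearity commutes with any permutation of its entries; this is the mechanism that lets a GNN trained on one labeling of a graph transfer to any relabeling of it.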
