Statistical Methods for Neural Networks
Discipline
Statistics and Probability
Subject
Hypothesis testing
Longitudinal analysis
Mobile health
Neural networks
Random effects
Abstract
Although traditionally valued for their predictive power, neural networks have utility beyond prediction, and techniques from classical statistics can be quite useful both for examining the inner workings of networks and for expanding their capabilities. Here, we propose new methodology that applies conventional statistical concepts to neural networks to improve prediction, inference, and model selection. In Chapter 2, we combine a mixed-effects modeling framework with feed-forward neural networks to improve prediction of complex longitudinal data. This mixed-effects neural network is particularly well suited to highly nonlinear longitudinal data with densely measured follow-up periods, which are common in mobile health studies. By leveraging both population- and subject-level effects, the model produces highly accurate predictions of future health outcomes for individual patients. In Chapter 3, we address the need for inferential methods for neural networks through the lens of hypothesis testing. Methods for robust inference that can be applied flexibly across network architectures extend the utility of machine learning beyond prediction and allow subject-matter knowledge to be gleaned from the estimated model fit. In Chapter 4, we develop an algorithm for hyperparameter tuning in neural networks that does not require validation data. By leveraging information from the partial derivatives of the network output with respect to the network inputs, we offer an alternative to standard hyperparameter tuning methods that must withhold a portion of the data. Together, these methods aim to extend the applicability of neural networks in biomedical research.
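To make the idea in Chapter 2 concrete, the sketch below shows one minimal way a feed-forward network could be combined with a random intercept: a shared (population-level) network produces a fixed-effect prediction, and a per-subject intercept shifts it for each individual. All names, the architecture, and the random-intercept formulation are illustrative assumptions, not the dissertation's actual model.

```python
# Hypothetical sketch of a mixed-effects neural network forward pass:
# population-level weights shared across subjects, plus a subject-specific
# random intercept u[i]. Illustrative only; not the dissertation's model.
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, b1, w2, b2, subject_ids, u):
    """Fixed-effect network prediction shifted by each subject's random intercept."""
    h = np.tanh(X @ W1 + b1)          # shared (population-level) hidden layer
    fixed = h @ w2 + b2               # population-level prediction
    return fixed + u[subject_ids]     # add the subject-level random effect

# Toy longitudinal data: 3 subjects, 4 repeated measurements each.
n_subj, n_obs, p = 3, 4, 2
subject_ids = np.repeat(np.arange(n_subj), n_obs)
X = rng.normal(size=(n_subj * n_obs, p))

# Population parameters (fixed effects) and per-subject intercepts (random effects).
W1 = rng.normal(size=(p, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=8);      b2 = 0.0
u  = rng.normal(scale=0.5, size=n_subj)

y_hat = forward(X, W1, b1, w2, b2, subject_ids, u)
print(y_hat.shape)  # one prediction per observation
```

In a full model, the fixed-effect weights would be fit by gradient descent while the random effects are shrunk toward zero via their assumed distribution; this sketch only illustrates how population- and subject-level terms combine in a single prediction.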