PHYSICS-INFORMED MACHINE LEARNING: THEORY, ALGORITHMS AND APPLICATIONS

Degree type
Doctor of Philosophy (PhD)
Graduate group
Applied Mathematics and Computational Science
Discipline
Mathematics
Physics
Subject
Machine Learning
Scientific Computing
Copyright date
2023
Author
Wang, Sifan
Abstract

The remarkable potential of deep learning in areas ranging from computer vision to natural language processing has found profound implications for modeling and simulating physical systems. Central to these advances is the emerging field of physics-informed machine learning, a fusion of physical principles with machine learning techniques. There are three predominant strategies for integrating physics: inductive biases, learning biases, and observational biases. Our study delves into the inherent challenges and limitations of physics-informed machine learning, particularly in physics-informed neural networks (PINNs) and deep operator networks (DeepONets). Our research is driven by overcoming fundamental challenges and enhancing the performance of these frameworks. First, we investigate the gradient flow of PINNs, identifying a training failure mode stemming from unbalanced back-propagated gradients. This insight motivates us to generalize neural tangent kernel (NTK) theory to PINNs. With this tool, we theoretically reveal that the training of PINNs suffers from spectral bias, causality violation, and discrepancies in the convergence rates of different loss terms. To address these critical issues, we propose several simple yet effective loss re-weighting algorithms and network architectures, and validate them across a wide range of representative benchmarks in computational physics. In addition, we present an extension of the PINN framework for solving free boundary problems. Moreover, we highlight the data-intensive demands of training neural operators and the potential inconsistency of their predictions with the underlying physics. To resolve these challenges, we propose the physics-informed DeepONet, introducing a simple and effective regularization mechanism that biases the outputs of DeepONet models toward physical consistency. Building on this, we propose an autoregressive training algorithm for performing long-time integration of evolution equations.
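The gradient imbalance described above can be illustrated with a toy two-term objective. The loss terms and the exponential-moving-average update below are hypothetical stand-ins, not the thesis's actual formulation: a stiff "residual" term overwhelms a mild "boundary" term, and an adaptive weight based on back-propagated gradient magnitudes restores balance, in the spirit of the re-weighting algorithms summarized in the abstract.

```python
import numpy as np

# Toy sketch of gradient-norm loss balancing for a two-term
# PINN-style objective L(theta) = L_r(theta) + lam * L_b(theta).
# Both loss terms are hypothetical: a stiff quadratic "residual"
# and a mild quadratic "boundary" penalty.

def grad_r(theta):
    # gradient of the stiff residual term 50 * ||theta - 1||^2
    return 100.0 * (theta - 1.0)

def grad_b(theta):
    # gradient of the mild boundary term 0.5 * ||theta||^2
    return theta

theta = 0.5 * np.ones(3)
lam, lr = 1.0, 1e-3
for step in range(200):
    gr, gb = grad_r(theta), grad_b(theta)
    # re-balance: pull lam toward the ratio of gradient magnitudes,
    # so the boundary term is not drowned out by the residual term
    lam = 0.9 * lam + 0.1 * (np.abs(gr).max() / (np.abs(gb).mean() + 1e-8))
    theta -= lr * (gr + lam * gb)
```

Without the adaptive weight (`lam = 1`), the update is dominated by the residual gradients, which are two orders of magnitude larger; with it, the weight grows until both terms contribute comparably to each step.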
We also analyze the training dynamics of DeepONets through the lens of NTK theory, uncovering a bias that favors the approximation of functions with larger magnitudes. We therefore propose a point-wise loss re-weighting algorithm to correct this bias, along with a novel network architecture that is more resilient to vanishing-gradient pathologies. We leverage the proposed physics-informed DeepONet to build fast and differentiable surrogates for rapidly solving PDE-constrained optimization problems, even in the absence of any paired input-output training data. In summary, this thesis provides an in-depth exploration of training, improving, and applying physics-informed machine learning, paving the way toward scientific machine learning algorithms with stronger robustness and accuracy guarantees, as required by many critical applications in computational science and engineering.
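The magnitude bias and the point-wise re-weighting idea can be sketched with a small numerical example. The inverse-magnitude weights below are a simple illustrative stand-in, not the NTK-derived weights developed in the thesis: under a plain MSE, the largest-scale target dominates the loss even when every point has the same relative error, while the weighted loss treats all scales equally.

```python
import numpy as np

# Hypothetical illustration of magnitude bias in a regression loss.
# Four targets span four orders of magnitude; every prediction has
# the same 10% relative error.

targets = np.array([0.01, 0.1, 1.0, 10.0])   # outputs at very different scales
preds = 1.1 * targets                        # uniform 10% relative error

# plain MSE: dominated almost entirely by the 10.0 target
plain = np.mean((preds - targets) ** 2)

# point-wise re-weighting (illustrative): inverse squared magnitude,
# clipped to avoid division by tiny values
weights = 1.0 / np.maximum(np.abs(targets), 1e-3) ** 2
weighted = np.mean(weights * (preds - targets) ** 2)
```

Here the largest target accounts for about 99% of the plain MSE, whereas each point contributes exactly 0.01 to the weighted loss, so no scale is favored.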

Advisor
Perdikaris, Paris
Date of degree
2023