SCALABLE AND RISK-AWARE VERIFICATION OF LEARNING ENABLED AUTONOMOUS SYSTEMS

Degree type
Doctor of Philosophy (PhD)
Graduate group
Electrical and Systems Engineering
Discipline
Electrical Engineering
Computer Sciences
Copyright date
01/01/2024
Author
Cleaveland, Matthew
Abstract

As autonomous systems become more prevalent, ensuring their safety becomes increasingly important. However, deriving guarantees for these systems is increasingly difficult due to their use of black-box, learning-enabled components and the growing range of operating domains in which they are deployed. The complexity of the learning-enabled components greatly increases the computational complexity of the verification problem. Additionally, the safety predictions obtained from verifying these systems must be conservative. This thesis explores two high-level approaches to verifying autonomous systems: probabilistic model checking and statistical model checking. Probabilistic model checking methods exhaustively analyze an abstract model of the system to reason about its properties. These methods generally suffer from scalability issues, but if the abstraction is built correctly, the results are provably conservative. Statistical model checking methods, by contrast, draw sample traces from the system to reason about its properties. These methods do not suffer the scalability drawback of probabilistic model checking, but their guarantees are weaker and may not even be conservative. This thesis introduces methods for improving the scalability of probabilistic model checking for autonomous systems and for incorporating notions of conservatism into statistical model checking.

On the probabilistic model checking side, the thesis first uses engineering intuitions about systems to reduce model checking complexity while preserving conservatism. Next, standard conservative probabilistic model checking techniques are used to synthesize runtime monitors that are both conservative and lightweight. Finally, the thesis presents a runtime method for composing monitors of verification assumptions; such assumptions are critical for simplifying verification problems so that they become computationally feasible.

For statistical model checking, the thesis first leverages a method called conformal prediction to bound the errors of trajectory predictors, which enables safe (i.e., conservative) planning in dynamic environments. A method for producing less conservative conformal prediction regions in time-series settings is also developed. Finally, a method called risk verification is developed, which uses statistical techniques to bound risk metrics of a system's performance. Risk metrics, which capture tail events in the system's performance, offer a statistical analogue of worst-case analysis.
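To make the trace-sampling idea concrete, the following is a minimal Python sketch of statistical model checking: traces are drawn from a hypothetical toy system and a one-sided Hoeffding-style confidence bound is placed on the probability of violating a safety property. The system, the property, and all names here are illustrative assumptions, not the methods developed in the thesis.

```python
import math
import random

def smc_failure_bound(sample_trace, satisfies, n, delta):
    """One-sided (1 - delta) upper confidence bound on P(trace violates
    the property), via a Hoeffding inequality on n i.i.d. sampled traces."""
    failures = sum(1 for _ in range(n) if not satisfies(sample_trace()))
    p_hat = failures / n
    return p_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Hypothetical toy system: a scalar random walk; the safety property is
# "the state stays inside [-5, 5] for the whole trace".
def sample_trace(steps=100):
    x, trace = 0.0, []
    for _ in range(steps):
        x += random.gauss(0.0, 0.5)
        trace.append(x)
    return trace

safe = lambda trace: all(abs(x) <= 5.0 for x in trace)
bound = smc_failure_bound(sample_trace, safe, n=2000, delta=0.01)
print(f"With 99% confidence, P(failure) <= {bound:.4f}")
```

Note that the bound holds only with the stated confidence over the sampled traces; unlike probabilistic model checking, nothing here is exhaustive or worst-case, which is exactly the weaker-guarantee trade-off the abstract describes.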
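The conformal prediction step can also be illustrated in a few lines. The sketch below, under the standard exchangeability assumption of split conformal prediction, turns held-out prediction errors of a hypothetical trajectory predictor into a radius that covers a fresh error with probability at least 1 - alpha; the calibration errors here are synthetic stand-ins, not the thesis's data or its time-series refinement.

```python
import numpy as np

def conformal_radius(calibration_errors, alpha):
    """Split conformal prediction: return r such that a fresh error is <= r
    with probability >= 1 - alpha, assuming exchangeable errors."""
    n = len(calibration_errors)
    # Finite-sample correction: take the ceil((n+1)(1-alpha))-th smallest error.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(calibration_errors, level, method="higher")

# Synthetic stand-in for ||predicted position - true position|| measured on a
# held-out calibration set of trajectories.
rng = np.random.default_rng(0)
errors = rng.gamma(shape=2.0, scale=0.5, size=500)
r = conformal_radius(errors, alpha=0.1)
# A planner can then treat the ball of radius r around each predicted agent
# position as a 90% conformal prediction region and plan to avoid it.
print(f"90% conformal prediction radius: {r:.3f}")
```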
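As one concrete instance of a risk metric, the sketch below estimates conditional value-at-risk (CVaR), the mean cost over the worst alpha-fraction of sampled per-trace costs. CVaR is a common tail-focused choice used here for illustration, not necessarily the exact metric bounded in the thesis, and the cost samples are synthetic.

```python
import numpy as np

def empirical_cvar(costs, alpha):
    """Empirical CVaR_alpha: the mean of the worst alpha-fraction of costs.
    It interpolates between the expected cost (alpha = 1) and the worst
    case (alpha -> 0), which is why risk metrics capture tail events."""
    costs = np.sort(np.asarray(costs))[::-1]       # worst (largest) first
    k = max(1, int(np.ceil(alpha * len(costs))))   # size of the alpha-tail
    return costs[:k].mean()

# Synthetic heavy-tailed per-trace costs standing in for sampled system
# performance (e.g., minimum distance shortfall to obstacles).
rng = np.random.default_rng(1)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(f"mean cost      : {costs.mean():.3f}")
print(f"CVaR (5% tail) : {empirical_cvar(costs, 0.05):.3f}")
```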

Advisor
Lee, Insup
Pappas, George J.
Date of degree
2024