Carpenter, Taylor J.

Publications

  • Publication
    Verifying the Safety of Autonomous Systems with Neural Network Controllers
    (2020-12-01) Ivanov, Radoslav; Carpenter, Taylor J.; Weimer, James; Alur, Rajeev; Pappas, George; Lee, Insup
    This paper addresses the problem of verifying the safety of autonomous systems with neural network (NN) controllers. We focus on NNs with sigmoid/tanh activations and use the fact that the sigmoid/tanh is the solution to a quadratic differential equation. This allows us to convert the NN into an equivalent hybrid system and cast the problem as a hybrid system verification problem, which can be solved by existing tools. Furthermore, we improve the scalability of the proposed method by approximating the sigmoid with a Taylor series with worst-case error bounds. Finally, we provide an evaluation over four benchmarks, including comparisons with alternative approaches based on mixed integer linear programming as well as on star sets. (A minimal sketch of the sigmoid-as-ODE idea appears after this list.)
  • Publication
    ModelGuard: Runtime Validation of Lipschitz-continuous Models
    (2021-07-01) Carpenter, Taylor J.; Ivanov, Radoslav; Lee, Insup; Weimer, James
    This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models. Although techniques exist for the validation of many classes of models, the majority of these methods cannot be applied to the whole class of Lipschitz-continuous models, which includes neural network models. Additionally, existing techniques generally consider only white-box models. By taking a sampling-based approach, we can address black-box models, represented only by an input-output relationship and a Lipschitz constant. We show that by randomly sampling from a parameter space and evaluating the model, it is possible to guarantee the correctness of traces labeled consistent and to provide a confidence measure on the correctness of traces labeled inconsistent. We evaluate the applicability and scalability of ModelGuard in three case studies, including a physical platform. (A toy sketch of the sampling idea appears after this list.)
  • Publication
    Verisig 2.0: Verification of Neural Network Controllers Using Taylor Model Preconditioning
    (2021-07-01) Ivanov, Radoslav; Carpenter, Taylor J.; Weimer, James; Alur, Rajeev; Pappas, George; Lee, Insup
    This paper presents Verisig 2.0, a verification tool for closed-loop systems with neural network (NN) controllers. We focus on NNs with tanh/sigmoid activations and develop a Taylor-model-based reachability algorithm through Taylor model preconditioning and shrink wrapping. Furthermore, we provide a parallelized implementation that allows Verisig 2.0 to efficiently handle larger NNs than existing tools can. We provide an extensive evaluation over 10 benchmarks and compare Verisig 2.0 against three state-of-the-art verification tools. We show that Verisig 2.0 is both more accurate and faster, achieving speed-ups of up to 21x and 268x against different tools. (A simplified sketch of the Taylor-remainder idea appears after this list.)
  • Publication
    Case Study: Verifying the Safety of an Autonomous Racing Car with a Neural Network Controller
    (2020-04-01) Ivanov, Radoslav; Carpenter, Taylor J.; Weimer, James; Alur, Rajeev; Pappas, George; Lee, Insup
    This paper describes a verification case study on an autonomous racing car with a neural network (NN) controller. Although several verification approaches have been recently proposed, they have only been evaluated on low-dimensional systems or systems with constrained environments. To explore the limits of existing approaches, we present a challenging benchmark in which the NN takes raw LiDAR measurements as input and outputs steering for the car. We train a dozen NNs using reinforcement learning (RL) and show that the state of the art in verification can handle systems with around 40 LiDAR rays. Furthermore, we perform real experiments to investigate the benefits and limitations of verification with respect to the sim2real gap, i.e., the difference between a system’s modeled and real performance. We identify cases, namely those similar to the modeled environment, in which verification is strongly correlated with safe behavior. Finally, we illustrate LiDAR fault patterns that can be used to develop robust and safe RL algorithms.
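
The sigmoid-as-ODE idea referenced in "Verifying the Safety of Autonomous Systems with Neural Network Controllers" rests on the fact that sigmoid and tanh satisfy quadratic differential equations, σ' = σ(1 − σ) and tanh' = 1 − tanh², which is what lets a sigmoid/tanh NN be recast as a hybrid system. The minimal sketch below, assuming only NumPy and SciPy, checks this fact numerically by recovering σ(x) through ODE integration; the function names and tolerances are illustrative and are not part of the Verisig tool.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_via_ode(x):
    # sigma solves y' = y * (1 - y) with y(0) = 1/2, so integrating that ODE
    # for "time" x reproduces sigma(x); this is the property that allows a
    # sigmoid layer to be encoded as hybrid-system dynamics for reachability tools.
    sol = solve_ivp(lambda t, y: y * (1.0 - y), (0.0, x), [0.5],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

for x in [-3.0, -0.5, 0.5, 1.0, 4.0]:
    print(f"x={x:5.1f}  closed-form={sigmoid(x):.8f}  via-ODE={sigmoid_via_ode(x):.8f}")
```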
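
The sampling idea behind ModelGuard can be illustrated with a toy, hypothetical one-parameter black-box model. The model, function names, and tolerances below are inventions for illustration, and the confidence computation that ModelGuard derives from the Lipschitz constant and the sampling density is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, u):
    # Stand-in black-box model, accessed only through evaluation: y = theta * u.
    return theta * u

def validate_trace(inputs, outputs, theta_low, theta_high, eps, n_samples=10_000):
    # Randomly sample the parameter space and evaluate the model on the trace.
    thetas = rng.uniform(theta_low, theta_high, size=n_samples)
    errors = np.array([np.max(np.abs(model(t, inputs) - outputs)) for t in thetas])
    best = float(errors.min())
    # If some sample reproduces the trace within eps, the "consistent" label is
    # guaranteed correct (that sample is a witness). Otherwise the trace is
    # labeled "inconsistent"; ModelGuard attaches a confidence to that label
    # using the Lipschitz constant, which this toy sketch omits.
    label = "consistent" if best <= eps else "inconsistent"
    return label, best

# Observed trace generated by theta = 1.3 plus small measurement noise.
u = np.linspace(-1.0, 1.0, 50)
y = 1.3 * u + rng.normal(scale=0.005, size=u.shape)
print(validate_trace(u, y, theta_low=0.0, theta_high=2.0, eps=0.02))
```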
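
Verisig 2.0's Taylor-model machinery (preconditioning, shrink wrapping, multivariate flowpipes) is far more involved than can be shown here. The sketch below only illustrates the underlying principle of bounding a tanh activation by a low-order Taylor expansion plus a rigorous remainder, on a plain scalar interval; it is a simplification under those assumptions, not Verisig 2.0's algorithm.

```python
import numpy as np

# Global bound on |tanh''(x)| = |-2 tanh(x) (1 - tanh(x)^2)|, attained where
# tanh(x) = +-1/sqrt(3): 4 / (3 * sqrt(3)) ~= 0.770.
TANH_SECOND_DERIV_BOUND = 4.0 / (3.0 * np.sqrt(3.0))

def tanh_enclosure(lo, hi):
    # Enclose tanh over [lo, hi] with a first-order Taylor expansion around the
    # midpoint c plus a Lagrange remainder:
    #   tanh(x) = tanh(c) + tanh'(c) (x - c) + R,  |R| <= max|tanh''| / 2 * r^2,
    # where r is the interval's half-width.
    c = 0.5 * (lo + hi)
    r = 0.5 * (hi - lo)
    slope = 1.0 - np.tanh(c) ** 2            # tanh'(c), always >= 0
    remainder = 0.5 * TANH_SECOND_DERIV_BOUND * r ** 2
    return np.tanh(c) - slope * r - remainder, np.tanh(c) + slope * r + remainder

lo, hi = tanh_enclosure(0.2, 0.6)
print(lo, np.tanh(0.2), np.tanh(0.6), hi)     # the enclosure contains the true range
```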