Fall 2019 Joint ITI/Science of Security Seminar Series

  • Posted on December 3, 2019 at 3:17 pm by Mike Prosise.
  • Categorized: Events.

Formal Verification of End-to-End Deep Reinforcement Learning  slides | video
Yasser Shoukry, Assistant Professor, Resilient Cyber-Physical Systems Lab, Department of Electrical Engineering & Computer Science, University of California, Irvine
November 19, 2019, 3:00 p.m., CSL Auditorium (B02)

Abstract: From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical and mechanical systems; self-driving cars, drones, and smart cities are just a few examples. Despite the explosion in the use of AI across a multitude of cyber-physical systems (CPS) domains, however, the safety and reliability of these AI-enabled CPS remain understudied. Mathematically based techniques for the specification, development, and verification of software and hardware systems, known as formal methods, hold the promise of rigorous analysis of the reliability and safety of AI-enabled CPS. In this talk, I will discuss our work on applying formal methods to verify the safety of autonomous vehicles controlled by end-to-end machine learning models and to synthesize certifiable end-to-end neural network architectures.
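To give a flavor of what formal verification of a neural network can mean (this is a generic building block used by many verifiers, not necessarily the speaker's specific method), the sketch below uses interval bound propagation to compute guaranteed bounds on the outputs of a tiny ReLU network over a whole box of inputs. The network weights are made-up illustration values.

```python
# Interval bound propagation (IBP): given an interval (box) of possible
# inputs, compute a sound over-approximation of the network's output range.
# If the resulting bounds lie inside a safe region, safety is verified for
# ALL inputs in the box, not just the ones we happened to simulate.

def affine_bounds(W, b, lo, hi):
    """Propagate interval bounds through y = W x + b."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Hypothetical weights: 2 inputs -> 2 hidden ReLU units -> 1 output.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

# Input box: each input ranges over [-0.1, 0.1].
lo, hi = affine_bounds(W1, b1, [-0.1, -0.1], [0.1, 0.1])
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)
print(lo, hi)  # → [0.0] [0.2]
```

Interval bounds can be loose for deep networks; practical tools tighten them with symbolic relaxations or SMT/MILP solving, but the soundness argument is the same.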

Predictable Autonomy for Cyber-Physical Systems  slides | video
Dr. Stanley Bak, Senior Research Scientist, Safe Sky Analytics
December 10, 2019, 3:00 p.m., CSL Auditorium (B02)

Abstract: Cyber-physical systems combine complex physics with complex software. Although these systems offer significant potential in fields such as smart grid design, autonomous robotics, and medical systems, verification of CPS designs remains challenging. Model-based design permits simulations to be used to explore potential system behaviors, but individual simulations do not provide full coverage of what the system can do. In particular, simulations cannot guarantee the absence of unsafe behaviors, which is unsettling since many CPS are safety-critical. Set-based analysis methods instead explore a system's behaviors using sets of states rather than individual states. The usual downside of this approach is limited scalability: such methods have typically worked only for very small models. This talk describes our recent progress on improving the scalability of set-based reachability computation for LTI hybrid automaton models, some of which can apply to very large systems (up to one billion continuous state variables!). Lastly, we'll discuss the significant overlap between the techniques used in our scalable reachability analysis methods and set-based input/output analysis of neural networks.
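As a minimal illustration of the set-based idea (a textbook sketch, not the speaker's algorithm), consider a discrete-time linear system. A linear map sends a polytope to the convex hull of the images of its vertices, so propagating only the vertices tracks the exact reachable set at each step, and a safety check on the vertices covers every state in the set. The dynamics matrix and safety bound below are hypothetical.

```python
# Set-based reachability for x[k+1] = A x[k]: propagate the vertices of an
# initial polytope. By convexity, every state in the polytope satisfies a
# linear-inequality safety property iff every vertex does.

def step(A, x):
    """Apply the linear dynamics to a single state vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[0.9, 0.2],
     [-0.2, 0.9]]  # hypothetical stable (contracting) dynamics

# Initial set: the box [-1, 1] x [-1, 1], represented by its 4 vertices.
vertices = [[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]]

for k in range(20):
    vertices = [step(A, v) for v in vertices]
    # Safety property: |x_i| <= 2 for every reachable state at step k.
    assert all(abs(c) <= 2.0 for v in vertices for c in v), f"unsafe at step {k}"

print("all 20 steps verified safe")
```

Vertex propagation is exact for linear maps but the vertex count explodes in high dimensions, which is why scalable tools use representations such as zonotopes or generator-based sets instead; the principle of reasoning about whole sets rather than single simulations is unchanged.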