Guest Lectures

We are thrilled to introduce a distinguished lineup of guest lecturers who will share their expertise and insights with us this semester. They bring a wealth of knowledge and experience from machine learning, formal methods, software engineering, cyber-physical systems, and other engineering domains.

These guest lectures are on Zoom only and open to all.

Vijay Ganesh, Professor, Georgia Tech (Nov 21)

At the Intersection of Reasoning and Learning: From Solvers to LLMs
Nov 21, Thursday, 2 pm – 3 pm CT
https://illinois.zoom.us/j/89436081158?pwd=50PdjanaJGMyhnGd1juvkmZcaKe1es.1

Abstract: From its inception, AI has had two broad sub-fields, reasoning and learning, with little interaction between them. In recent years, there has been a growing recognition that if our goal is to solve problems at the cutting edge of AI (trustworthy AI, AI for Science, AI for Math), then we need to bring these sub-fields together. In this talk, I will present techniques and results showing how machine learning (ML) can be used in service of automated reasoning (à la SAT/SMT solvers) and, in the reverse direction, how symbolic reasoning engines can be used to improve LLMs. The key idea in both directions is the same: the ML model is viewed as a synthesizer that generates assignments/code/proofs/molecules/equations, while the reasoning engine acts as a verifier that provides corrective feedback to the model at various points (training, fine-tuning, or inference) in its life cycle.
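The synthesizer/verifier loop described in the abstract can be sketched in a few lines. This is a toy illustration, not the speaker's system: `synthesizer` is a hypothetical stand-in for an ML model (here, a random proposer that avoids previously rejected candidates), and `verifier` stands in for a SAT/SMT solver or proof checker, reduced to checking a trivial specification.

```python
import random

def verifier(candidate):
    # Stand-in for a reasoning engine (e.g. a SAT/SMT solver or proof
    # checker): accepts a candidate iff it meets the toy spec x*x == 16.
    return candidate * candidate == 16

def synthesizer(rejected):
    # Stand-in for an ML model: proposes candidates, steering away from
    # ones the verifier has already rejected (the "corrective feedback").
    candidate = random.randint(-10, 10)
    while candidate in rejected:
        candidate = random.randint(-10, 10)
    return candidate

def generate_and_verify(max_iters=1000):
    rejected = set()
    for _ in range(max_iters):
        candidate = synthesizer(rejected)
        if verifier(candidate):
            return candidate      # verified output
        rejected.add(candidate)   # feed rejection back to the synthesizer
    return None
```

In the real setting the feedback is richer (counterexamples, unsat cores, failed proof obligations) and reaches the model during training, fine-tuning, or inference rather than as a simple blocklist.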

Stanley Bak, Assistant Professor, Stony Brook University (Nov 5)

Safe Autonomy for Cyber-Physical Systems Using Surrogate Verification
Nov 5, Tuesday, 2 pm – 3:20 pm CT
https://illinois.zoom.us/j/83268433434?pwd=1TbAVzcDHFI8VYSOjLZ3bLeoV6qiTg.1
Abstract: Using autonomy within safety-critical applications demands strong assurances that the system will not misbehave. Rather than direct analysis and verification, which can be hard, we instead explore using surrogate modeling. With surrogate modeling, we can create models that closely approximate system behaviors while being more amenable to formal verification methods that can prove system safety. We explore this strategy in two contexts: one for closed-loop neural network control system verification and one for approximating nonlinear dynamical systems using Koopman Operator approximations.
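To make the Koopman-surrogate idea concrete, here is a minimal sketch (not the speaker's implementation) of fitting a linear Koopman approximation by least squares, in the style of extended dynamic mode decomposition: lift states through a hypothetical dictionary of observables `psi`, then solve for a matrix `K` with `psi(x_next) ≈ K psi(x)`. The dynamics and dictionary below are toy choices for illustration.

```python
import numpy as np

def psi(x):
    # Hypothetical dictionary of observables (the "lifting" function).
    return np.array([x, x**2])

def step(x):
    # Toy dynamics x' = 0.9 x, used only to generate trajectory data.
    return 0.9 * x

# Snapshot data: pairs (psi(x), psi(step(x))) over sampled states.
xs = np.linspace(-1.0, 1.0, 50)
Psi_now = np.stack([psi(x) for x in xs])         # shape (50, 2)
Psi_next = np.stack([psi(step(x)) for x in xs])  # shape (50, 2)

# Least-squares fit: find K with Psi_next ≈ Psi_now @ K.T
K_T, *_ = np.linalg.lstsq(Psi_now, Psi_next, rcond=None)
K = K_T.T
```

The payoff is that `K` is linear in the lifted space, so reachability and safety questions about the surrogate can be handled with the mature toolbox for linear systems, even when the original dynamics are nonlinear.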

Jingbo Wang, Assistant Professor, Purdue University (Oct 22)

Ensuring Trustworthy ML with Differential Verification
Oct 22, Tuesday, 2 pm – 3:20 pm CT
https://illinois.zoom.us/j/89843463765?pwd=yLCWW20XVGFbRWlXMVqWt3QvTZuz1v.1
Abstract: Deep neural networks have become an integral component of many systems for which ensuring safety, robustness, and fairness is crucial. In this talk, we present several abstract-interpretation-based methods for efficient verification of a class of safety properties called differential properties. While we focus on neural network equivalence as the canonical example, other interesting properties concerning input sensitivity and stability can also be cast as differential properties. Our key insight is in deriving sound abstractions that relate the intermediate computations of two structurally similar neural networks, to accurately bound their maximum difference over all inputs. Beyond safety properties, our differential verification techniques also provide guarantees for both demographic and individual fairness.
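The core intuition behind differential verification can be illustrated with interval arithmetic on a single linear layer. The sketch below is a simplification of the general idea, not the speaker's method: for two structurally similar linear models, the difference f1(x) − f2(x) = (W1 − W2)x + (b1 − b2) can be bounded by propagating intervals through the *difference* network, which is much tighter than bounding each model separately and subtracting. All weights and input bounds here are made-up toy values.

```python
import numpy as np

def interval_matvec(W, lo, hi):
    # Sound interval bound on W @ x over the box lo <= x <= hi,
    # using the center/radius form of interval arithmetic.
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

# Two structurally similar one-layer models (toy weights).
W1, b1 = np.array([[1.0, 2.0]]), np.array([0.5])
W2, b2 = np.array([[1.1, 1.9]]), np.array([0.4])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Bound the difference network directly: (W1 - W2) x + (b1 - b2).
dlo, dhi = interval_matvec(W1 - W2, lo, hi)
diff_lo, diff_hi = dlo + (b1 - b2), dhi + (b1 - b2)
```

Here the difference is provably within [−0.1, 0.3], whereas bounding each model independently over the same box and subtracting would give an interval several times wider; relating the two networks' intermediate computations is what makes differential verification precise.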

Bio: Jingbo Wang is an Assistant Professor in the Elmore Family School of Electrical and Computer Engineering at Purdue University. She earned her Ph.D. in Computer Science from the University of Southern California in August 2023. Following her Ph.D., she was a postdoctoral fellow in the Department of Computer Science at UT Austin. Her research lies at the intersection of software engineering and formal methods, focusing on developing rigorous program analysis and synthesis techniques to enhance the security, robustness, and fairness of software systems. She is the recipient of several awards, including the MIT EECS Rising Stars (2021), the Wise Merit Award (2021), and the ACM SIGPLAN Distinguished Paper Award (PLDI, 2023).

Samuel Chevalier, Assistant Professor, University of Vermont (Oct 24)

From NP-Complete to NP-Hard: Exploiting Mathematical Synergies between Electric Power Grid Operation and Machine Learning Verification
Oct 24, Thursday, 2 pm – 3:20 pm CT
https://illinois.zoom.us/j/89843463765?pwd=yLCWW20XVGFbRWlXMVqWt3QvTZuz1v.1
Abstract: The secure optimization and control of large-scale electric power grids is a computational nightmare. Given the short timescales needed for decision making, the safety-critical nature of these decisions, and the sheer scale of the grid, the computational tools needed for safe and reliable power system operation are both highly advanced and presently lacking. As you may have guessed, machine learning (ML) is seen as a useful tool that may help overcome some of these challenges, potentially transforming grid operation. Given this context, this talk has three goals. First, we will review several challenges, emerging from both academic research and real-world engineering, associated with running the power grid. Second, given the increasing use of ML to solve research-grade power grid problems, we will explore the rise of ML verification within electrical power systems; such verification has focused on constraint satisfaction, optimality guarantees, prediction accuracy, and more. Finally, we will explore the organic, yet under-investigated, mathematical connections between the secure/optimal operation of power systems and the scalable verification approaches emerging from the ML verification community. These connections allow us to formulate a number of interesting and useful problems that draw on insights from both the power system optimization community and the formal verification community.

Bio: Sam Chevalier is an Assistant Professor in the Department of Electrical and Biomedical Engineering at the University of Vermont (UVM), which he joined in the fall of 2023. His research includes designing industry-relevant optimization and control strategies for renewable-based power grids, building trustworthy machine learning tools for safety-critical engineering applications, and developing advanced data-driven modeling techniques for the power and energy sectors. Prior to joining UVM, Sam was a Marie Skłodowska-Curie Postdoctoral Fellow at the Technical University of Denmark, where his research focused on developing computational techniques for verifying the performance of neural network models of electrical power system operation. Sam received his PhD from the Mechanical Engineering department at the Massachusetts Institute of Technology (MIT) in 2021, and his BS/MS degrees from the EE department at UVM in 2015/2016.

ECE598HZ: Advanced Topics in Machine Learning and Formal Methods
Email: huanz@illinois.edu