Projects – In Progress
An Automated Synthesis Framework for Network Security and Resilience Analysis
Matt Caesar and Dong (Kevin) Jin
We propose to develop the analysis methodology needed to support scientific reasoning about the resilience and security of networks, with a particular focus on network control and information/data flow. The core of this vision is an automated synthesis framework (ASF), which will automatically derive network state and repairs from a set of specified correctness requirements and security policies. ASF consists of a set of techniques for performing and integrating security and resilience analyses applied at different layers (i.e., data forwarding, network control, programming language, and application software) in a real-time and automated fashion. The ASF approach is exciting because developing it adds to the theoretical underpinnings of the Science of Security (SoS), while using it supports the practice of SoS.
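To make the idea concrete, the following minimal Python sketch (our illustration, not the project's actual ASF) brute-force searches a toy four-node topology for forwarding states that satisfy a specified reachability requirement and a quarantine policy; the topology, policy names, and search strategy are placeholders for the constraint-driven synthesis the project will develop.

    # Hypothetical sketch of the idea behind automated synthesis: enumerate
    # candidate forwarding configurations for a toy network and keep only
    # those that satisfy the stated correctness requirement (reachability)
    # and security policy (a quarantined node must not carry the flow).
    # This brute-force search stands in for the constraint-solving machinery
    # a real ASF would use; topology, policies, and names are illustrative.

    from itertools import chain, combinations

    NODES = ["A", "B", "C", "D"]
    LINKS = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
    QUARANTINED = {"C"}          # security policy: C must not forward the flow
    SRC, DST = "A", "D"          # correctness requirement: A must reach D

    def reaches(rules, src, dst):
        """Follow enabled links from src; True if dst is reachable."""
        frontier, seen = [src], {src}
        while frontier:
            node = frontier.pop()
            if node == dst:
                return True
            for u, v in rules:
                if u == node and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        return False

    def satisfies_policies(rules):
        no_quarantine = all(u not in QUARANTINED and v not in QUARANTINED
                            for u, v in rules)
        return no_quarantine and reaches(rules, SRC, DST)

    def synthesize():
        """Search all subsets of links (the 'network state') for compliant ones."""
        subsets = chain.from_iterable(
            combinations(LINKS, k) for k in range(len(LINKS) + 1))
        return [set(rules) for rules in subsets if satisfies_policies(rules)]

    if __name__ == "__main__":
        for config in synthesize():
            print("compliant forwarding state:", sorted(config))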
A Human-Agent-Focused Approach to Security Modeling
William Sanders
Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for cybersecurity risk assessment. This is achieved through development of a general-purpose modeling and simulation approach for cybersecurity aspects of cyber-systems and of all human agents that interact with those systems. These agents include adversaries, defenders, and users. The ultimate goal is to generate quantitative metrics that will help system architects make better design decisions to achieve system resiliency. Prior work on modeling enterprise systems and their adversaries has shown the promise of such modeling abstractions and the feasibility of using them to study the behavior of a large class of systems under cyber attack. Our hypothesis is that incorporating all human agents who interact with a system will create more realistic simulations and produce insights into fundamental questions about how to lower cybersecurity risk. System architects can leverage the results to build more resilient systems that are able to achieve their mission objectives despite attacks.
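As a purely illustrative sketch of how such a simulation can yield quantitative metrics, the toy Monte Carlo loop below (not a project model) has adversary, defender, and user agents act each round and estimates a mission-success probability over many runs; all behaviors and probabilities are invented placeholders.

    # Hypothetical Monte Carlo sketch of a human-agent simulation: adversary,
    # defender, and user agents act each round, and repeated runs yield a
    # quantitative resiliency metric (the fraction of runs in which the
    # mission survives). All probabilities and behaviors are placeholders.

    import random

    def run_mission(rounds=30, rng=random):
        compromised = False
        for _ in range(rounds):
            user_clicks_phish = rng.random() < 0.05      # user agent behavior
            attack_attempted  = rng.random() < 0.20      # adversary agent behavior
            if attack_attempted and (user_clicks_phish or rng.random() < 0.02):
                compromised = True
            if compromised and rng.random() < 0.30:      # defender agent response
                compromised = False                      # detected and remediated
            if compromised and rng.random() < 0.10:
                return False                             # mission objective lost
        return True

    def resiliency_metric(trials=10_000, seed=1):
        rng = random.Random(seed)
        successes = sum(run_mission(rng=rng) for _ in range(trials))
        return successes / trials

    if __name__ == "__main__":
        print(f"estimated mission-success probability: {resiliency_metric():.3f}")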
A Monitoring Fusion and Response Framework to Provide Cyber Resiliency
William Sanders
We believe that diversity and redundancy can help us prevent an attacker from hiding all of his or her traces. Therefore, we will strategically deploy diverse security monitors and build a set of techniques to combine information originating at the monitors. We have shown that we can formulate monitor deployment as a constrained optimization problem wherein the objective function is the utility of monitors in detecting intrusions. In this project, we will develop methods to select and place diverse monitors at different architectural levels in the system and to evaluate the trustworthiness of the data generated by the monitors. We will build event aggregation and correlation algorithms to draw inferences for intrusion detection. Those algorithms will combine the events and alerts generated by the deployed monitors with important system-related information, including information on the system architecture, users, and vulnerabilities. Since rule-based detection systems fail to detect novel attacks, we will adapt and extend existing anomaly detection methods. We will build on our previous SoS-funded work that resulted in the development of special-purpose intrusion detection methods.
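One simple way to instantiate that constrained optimization, shown only as an illustration, is a greedy budgeted-coverage heuristic: each candidate monitor has a deployment cost and a set of event types it can observe, and monitors are selected by marginal coverage per unit cost until the budget is exhausted. The monitor names, coverage sets, costs, and budget below are hypothetical.

    # Hypothetical instantiation of the constrained-optimization view of
    # monitor deployment: greedily pick monitors that maximize marginal
    # event coverage per unit cost under a deployment budget.

    MONITORS = {
        "host_ids":  {"cost": 3, "covers": {"malware", "priv_escalation"}},
        "net_flow":  {"cost": 2, "covers": {"exfiltration", "scanning"}},
        "syslog":    {"cost": 1, "covers": {"priv_escalation", "brute_force"}},
        "app_audit": {"cost": 2, "covers": {"malware", "brute_force"}},
    }

    def place_monitors(budget):
        chosen, covered, spent = [], set(), 0
        while True:
            best, best_ratio = None, 0.0
            for name, m in MONITORS.items():
                if name in chosen or spent + m["cost"] > budget:
                    continue
                gain = len(m["covers"] - covered)     # marginal detection utility
                ratio = gain / m["cost"]
                if ratio > best_ratio:
                    best, best_ratio = name, ratio
            if best is None:
                return chosen, covered
            chosen.append(best)
            covered |= MONITORS[best]["covers"]
            spent += MONITORS[best]["cost"]

    if __name__ == "__main__":
        selection, events = place_monitors(budget=4)
        print("deploy:", selection, "| detectable event types:", sorted(events))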
Resilient Control of Cyber-Physical Systems with Distributed Learning
Sayan Mitra, Geir Dullerud, and Sanjay Shakkottai
Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models help to expand, customize, and optimize the capabilities of the systems, but they are also vulnerable to a new and imminent class of attacks. This project will develop foundations and methodologies to make such systems resilient. Our focus is on control systems that utilize large-scale, crowd-sourced data collection to train predictive AI models, which are then used to control and optimize the system's performance. Consider congestion-aware traffic routing and autonomous vehicles: to design controllers for such systems, large amounts of user data are collected to train AI models that predict network congestion dynamics and human driving behaviors, respectively, and these models then guide the overall closed-loop control system.
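The toy Python fragment below illustrates, with invented routes, delays, and attack, how such a closed loop can be steered through its training data: crowd-sourced delay reports fit a simple per-route predictor, a controller routes on its predictions, and a handful of poisoned reports changes the routing decision. It is a sketch of the vulnerability, not of the project's defenses.

    # Hypothetical illustration of the closed-loop setting: crowd-sourced
    # delay reports train a simple predictive model (a per-route mean), and
    # a routing controller acts on its predictions. Poisoned reports show
    # how corrupted training data can steer the closed loop.

    from statistics import mean
    from collections import defaultdict

    def train_predictor(reports):
        """reports: list of (route, observed_delay) pairs from users."""
        by_route = defaultdict(list)
        for route, delay in reports:
            by_route[route].append(delay)
        return {route: mean(delays) for route, delays in by_route.items()}

    def controller(predictor):
        """Pick the route with the lowest predicted delay."""
        return min(predictor, key=predictor.get)

    honest_reports = [("highway", 12.0), ("highway", 14.0),
                      ("side_road", 20.0), ("side_road", 22.0)]
    # Adversarial users flood the system with inflated highway delays.
    poisoned_reports = honest_reports + [("highway", 60.0)] * 5

    if __name__ == "__main__":
        print("clean model routes via:   ", controller(train_predictor(honest_reports)))
        print("poisoned model routes via:", controller(train_predictor(poisoned_reports)))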
Uncertainty in Security Analysis
David Nicol
Cyber-physical system (CPS) security lapses may lead to catastrophic failure. We are interested in the scientific basis for discovering unique CPS security vulnerabilities to dynamics-aware attacks that alter behaviors of components in ways that lead to instability, unsafe behavior, and ultimately diminished availability. Our project advances this scientific basis through security-metrics-driven design and evaluation of CPS, based on formalization of adversary classes and security metrics. We propose to define metrics, and then develop and study static and dynamic analysis algorithms that provide formal guarantees on them with respect to different adversary classes.
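As an illustrative sketch of a metric evaluated against adversary classes (not the project's formalization), the fragment below abstracts a CPS as a graph of compromise steps, each gated by an adversary capability, and computes the minimum number of steps each adversary class needs to reach an unsafe actuation state; the graph, capabilities, and classes are invented.

    # Hypothetical security metric with respect to adversary classes: the CPS
    # is abstracted as a graph of compromise steps, each requiring a
    # capability, and the metric is the minimum number of steps an adversary
    # with a given capability set needs to reach the unsafe state.

    from collections import deque

    # edge: (from_state, to_state, required_capability)
    STEPS = [
        ("internet",  "hmi",              "phishing"),
        ("internet",  "historian",        "sql_injection"),
        ("hmi",       "plc",              "protocol_abuse"),
        ("historian", "plc",              "protocol_abuse"),
        ("plc",       "unsafe_actuation", "firmware_mod"),
    ]

    ADVERSARY_CLASSES = {
        "script_kiddie": {"phishing"},
        "insider":       {"phishing", "protocol_abuse", "firmware_mod"},
        "nation_state":  {"phishing", "sql_injection", "protocol_abuse", "firmware_mod"},
    }

    def min_steps_to_unsafe(capabilities, start="internet", unsafe="unsafe_actuation"):
        """Shortest attack path length, or None if the class cannot reach it."""
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            state, dist = queue.popleft()
            if state == unsafe:
                return dist
            for src, dst, cap in STEPS:
                if src == state and cap in capabilities and dst not in seen:
                    seen.add(dst)
                    queue.append((dst, dist + 1))
        return None

    if __name__ == "__main__":
        for name, caps in ADVERSARY_CLASSES.items():
            print(f"{name}: minimum compromise steps = {min_steps_to_unsafe(caps)}")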