ITI Joint Trust and Security/Science of Security Seminars Fall 2015

A Hypothesis Testing Framework for Network Security  Slides | Video
Brighten Godfrey, Associate Professor, Computer Science, University of Illinois at Urbana-Champaign
September 15, 2015, 4:00 p.m., 2405 Siebel Center

Abstract: We rely on network infrastructure to deliver critical services and ensure security. Yet networks today have reached a level of complexity that is far beyond our ability to have confidence in their correct behavior – resulting in significant time investment and security vulnerabilities that can cost millions of dollars, or worse. Motivated by this need for rigorous understanding of complex networks, I will give an overview of our Science of Security lablet project, A Hypothesis Testing Framework for Network Security.

First, I will discuss the emerging field of network verification, which transforms network security by rigorously checking that intended behavior is correctly realized across the live running network. Our research developed a technique called data plane verification, which has discovered problems in operational environments and can verify hypotheses and security policies with millisecond-level latency in dynamic networks. In just a few years, data plane verification has moved from early research prototypes to production deployment. We have built on this technique to reason about hypotheses even under the temporal uncertainty inherent in a large distributed network. Second, I will discuss a new approach to reasoning about networks as databases that we can query to determine answers to behavioral questions and to actively control the network. This talk will span work by a large group of folks, including Anduo Wang, Wenxuan Zhou, Dong Jin, Jason Croft, Matthew Caesar, Ahmed Khurshid, and Xuan Zou.
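To make the idea of data plane verification concrete, here is a minimal sketch (not from the talk) of checking a reachability hypothesis against a static snapshot of forwarding rules. The topology, rule format, and longest-prefix-match walk below are illustrative assumptions, not the actual verifier.

```python
# Minimal sketch: checking a reachability "hypothesis" against a static
# snapshot of forwarding tables (longest-prefix match per switch).
# The topology and rules below are invented for illustration only.
import ipaddress

# forwarding table: switch -> list of (prefix, next hop); "HOST_B" is an edge port
RULES = {
    "s1": [(ipaddress.ip_network("10.0.0.0/16"), "s2"),
           (ipaddress.ip_network("10.0.2.0/24"), "s3")],
    "s2": [(ipaddress.ip_network("10.0.1.0/24"), "HOST_B")],
    "s3": [],  # packets matching no rule are dropped here
}

def next_hop(switch, dst):
    """Longest-prefix match on one switch's forwarding table."""
    matches = [(net, hop) for net, hop in RULES.get(switch, []) if dst in net]
    if not matches:
        return None  # no matching rule: packet dropped
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def can_reach(src_switch, dst_ip, target, max_hops=16):
    """Hypothesis: packets for dst_ip injected at src_switch reach `target`."""
    dst = ipaddress.ip_address(dst_ip)
    cur = src_switch
    for _ in range(max_hops):
        if cur == target:
            return True
        cur = next_hop(cur, dst)
        if cur is None:
            return False
    return False  # loop or path longer than max_hops

print(can_reach("s1", "10.0.1.5", "HOST_B"))   # True under these rules
print(can_reach("s1", "10.0.2.9", "HOST_B"))   # False: s3 drops the packet
```

A real verifier reasons over all packet headers symbolically rather than tracing one address at a time, which is what makes millisecond-level checking of whole policies possible.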

Scalable Data Analytics Pipeline for Real-Time Attack Detection: Design, Validation, and Deployment in a Honey Pot Environment  Slides | Video
Eric Badger, Research Assistant, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
October 6, 2015, 4:00 p.m., 2405 Siebel Center

Abstract: This talk will explore a scalable data analytics pipeline for real-time attack detection through the use of customized honeypots at the National Center for Supercomputing Applications (NCSA). Attack detection tools are common and are constantly improving, but validating these tools is challenging. You must: (i) identify data (e.g., system-level events) that is essential for detecting attacks, (ii) extract this data from multiple data logs collected by runtime monitors, and (iii) present the data to the attack detection tools. On top of this, such an approach must scale with an ever-increasing amount of data, while allowing integration of new monitors and attack detection tools. All of these require an infrastructure to host and validate the developed tools before deployment into a production environment.

We will present a generalized architecture that aims for a real-time, scalable, and extensible pipeline that can be deployed in diverse infrastructures to validate arbitrary attack detection tools. To motivate our approach, we will show an example deployment of our pipeline based on open-source tools. The example deployment uses as its data sources: (i) a customized honeypot environment at NCSA and (ii) a container-based testbed infrastructure for interactive attack replay. Each of these data sources is equipped with network and host-based monitoring tools such as Bro (a network-based intrusion detection system) and OSSEC (a host-based intrusion detection system) to allow for the runtime collection of data on system/user behavior. Finally, we will present an attack detection tool that we developed and that we aim to validate through our pipeline. In conclusion, the talk will discuss the challenges of transitioning attack detection from theory to practice and how the proposed data analytics pipeline can help that transition.
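As a rough illustration of the kind of normalization stage such a pipeline needs, the sketch below (not the NCSA deployment) parses Bro-style and OSSEC-style records into a common event format and hands them, in time order, to a toy detector. The log layouts, field names, and correlation rule are all assumptions made for illustration.

```python
# Sketch of one pipeline stage: normalizing heterogeneous monitor output into
# a common event record before it reaches an attack-detection tool.
# The log formats and field names are illustrative assumptions only.
import json

def normalize_bro_conn(line):
    """Assume a tab-separated conn record: ts, src, dst, service."""
    ts, src, dst, service = line.strip().split("\t")
    return {"source": "bro", "time": float(ts), "host": src,
            "event": "conn:" + service}

def normalize_ossec_alert(line):
    """Assume a JSON alert with 'timestamp', 'agent', and 'rule' fields."""
    alert = json.loads(line)
    return {"source": "ossec", "time": float(alert["timestamp"]),
            "host": alert["agent"], "event": "rule:%s" % alert["rule"]}

def run_pipeline(sources, detector):
    """Merge normalized events in time order and hand each one to the detector."""
    events = [parse(line) for lines, parse in sources for line in lines]
    for record in sorted(events, key=lambda r: r["time"]):
        detector(record)

# Toy detector: flag a host once both monitors have reported activity on it.
seen = {}
def detector(record):
    seen.setdefault(record["host"], set()).add(record["source"])
    if seen[record["host"]] == {"bro", "ossec"}:
        print("correlated activity on host:", record["host"])

bro_lines = ["1444152000.1\t172.16.0.7\t203.0.113.5\tssh"]
ossec_lines = ['{"timestamp": 1444152001.0, "agent": "172.16.0.7", "rule": 5716}']
run_pipeline([(bro_lines, normalize_bro_conn),
              (ossec_lines, normalize_ossec_alert)], detector)
```

In a production setting this stage would sit on a streaming substrate so that new monitors and detectors can be attached without reworking the rest of the pipeline.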

Accounting for User Behavior in Predictive Cyber Security Models  Slides | Video
Mohammad Noureddine, Research Assistant, Computer Science, University of Illinois at Urbana-Champaign
October 20, 2015, 4:00 p.m., 2405 Siebel Center

Abstract: The human factor is often regarded as the weakest link in cybersecurity systems. Investigations of several security breaches reveal that human error plays a significant role in exposing security vulnerabilities. Although security researchers have long observed the impact of human behavior, few improvements have been made in designing secure systems that are resilient to the uncertainties of the human element.

In this talk, we discuss several psychological theories that attempt to understand and influence human behavior in the cyber world. Our goal is to use such theories to build predictive cyber security models that include the behavior of typical users as well as system administrators. We then illustrate the importance of our approach by presenting a case study that incorporates models of human users. We analyze our preliminary results, discuss the challenges they raise, and outline our approaches for addressing them in future work.
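As a purely hypothetical illustration of what it can mean to include user behavior in a predictive model, the toy simulation below makes the probability of a user error an explicit parameter. The states, probabilities, and outcome measure are invented and are not the models discussed in the talk.

```python
# Toy illustration of folding a human-behavior parameter into a predictive
# security model: a small stochastic simulation where the probability that a
# user clicks a phishing link is explicit. All numbers here are invented.
import random

def simulate(p_click, p_admin_remediate, runs=1000, seed=0):
    """Estimate the fraction of runs that end in compromise."""
    rng = random.Random(seed)
    compromised = 0
    for _ in range(runs):
        state = "safe"
        for _ in range(30):                      # 30 user/admin "events" per run
            if state == "safe" and rng.random() < p_click:
                state = "foothold"               # user error gives the attacker a foothold
            elif state == "foothold":
                if rng.random() < p_admin_remediate:
                    state = "safe"               # the administrator notices and remediates
                else:
                    state = "compromised"
                    break
        compromised += (state == "compromised")
    return compromised / runs

for p_click in (0.01, 0.05, 0.20):
    print(p_click, simulate(p_click, p_admin_remediate=0.5))
```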

SMT-Based Controller Synthesis for Linear Dynamical Systems with Adversary  Slides | Video
Zhenqi Huang, Research Assistant, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Yu Wang, Research Assistant, Mechanical Science and Engineering, University of Illinois at Urbana-Champaign
November 3, 2015, 4:00 p.m., 2405 Siebel Center

Abstract: We present a controller synthesis algorithm for a discrete time reach-avoid problem in the presence of adversaries. Our model of the adversary captures typical malicious attacks envisioned on cyber-physical systems such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and precisely compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. We provide constraint-based synthesis algorithms for open-loop and closed-loop controllers using SMT solvers.
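For a flavor of the constraint-based approach, the sketch below encodes a one-dimensional open-loop reach-avoid problem in the z3 SMT solver, in the spirit of the decomposition above: the adversary-free trajectory is written symbolically in the control inputs, and the target and safe sets are shrunk by an assumed bound on the displacement the adversary can induce. The dynamics, sets, and budget are invented; the talk's algorithm handles the general multi-dimensional linear case.

```python
# Hedged sketch of SMT-based open-loop synthesis for a scalar linear system
# x[t+1] = a*x[t] + u[t] + d[t], with the adversary's effect bounded.
# We constrain the adversary-free trajectory and shrink the sets by a
# worst-case margin. All numbers (dynamics, sets, budget) are invented.
# Requires the z3-solver Python bindings.
from z3 import Real, Solver, sat

a, horizon = 0.9, 5
adversary_margin = 0.3          # assumed bound on the adversary's displacement

u = [Real("u%d" % t) for t in range(horizon)]
x = [Real("x%d" % t) for t in range(horizon + 1)]

s = Solver()
s.add(x[0] == 0)
for t in range(horizon):
    s.add(x[t + 1] == a * x[t] + u[t])          # adversary-free dynamics
    s.add(u[t] >= -1, u[t] <= 1)                # actuation limits
    s.add(x[t + 1] >= -2 + adversary_margin,    # stay inside the safe corridor,
          x[t + 1] <= 2 - adversary_margin)     # shrunk by the adversary margin
# reach: end inside the target [0.5, 1.5], also shrunk by the margin
s.add(x[horizon] >= 0.5 + adversary_margin, x[horizon] <= 1.5 - adversary_margin)

if s.check() == sat:
    m = s.model()
    print("open-loop inputs:", [m.eval(ui) for ui in u])
else:
    print("no open-loop controller exists under these constraints")
```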

ITI Joint Trust and Security/Science of Security Seminars Spring 2016

User Expectations in Mobile App Security  slides | video
Tao Xie, Associate Professor, Computer Science, University of Illinois at Urbana-Champaign
January 26, 2016, 4:00 p.m., B02 Coordinated Science Lab

Abstract: Maintaining the security and privacy hygiene of mobile apps is a critical challenge. Unfortunately, no program analysis algorithm can determine that an application is “secure” or “malware-free.” For example, if an application records audio during a phone call, it may be malware. However, the user may want to use such an application to record phone calls for archival and benign purposes. A key challenge for automated program analysis tools is determining whether or not that behavior is actually desired by the user (i.e., user expectation). This talk presents recent research progress in exploring user expectations in mobile app security.

Towards Preserving Mobile Users’ Privacy in the Context of Utility Apps
Wing Lam, Research Assistant, Computer Science, University of Illinois at Urbana-Champaign
March 1, 2016, 4:00 p.m., B02 Coordinated Science Lab

Abstract: A variety of valuable mobile utility apps rely heavily on collecting a user’s app usage data to carry out their promised utilities and enhance user experiences. Part of such app usage data often contains security-sensitive information, which raises an important and challenging issue: how to balance the user’s privacy against the utility app’s functionality. To address this issue, we propose a new privacy framework that combines runtime sensitive-information detection, utility-impact analysis, privacy-policy compliance checking, and balanced data anonymization, enabling a third-party app to determine which original values to keep in sanitized data in order to deliver a desirable level of utility.
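As a toy illustration of the balancing idea (not the proposed framework itself), the sketch below decides field by field whether to keep an original value or generalize it, based on assumed sensitivity and utility weights. The policy, fields, and scoring rule are all invented.

```python
# Toy sketch of "balanced" sanitization: keep a field's original value only if
# its utility weight outweighs its assumed privacy sensitivity; otherwise
# generalize it. The policy, weights, and generalizations are invented.
POLICY = {
    # field: (sensitivity 0..1, utility weight 0..1, generalizer)
    "app_name":   (0.1, 0.9, lambda v: v),
    "gps":        (0.9, 0.6, lambda v: "city-level"),
    "contact_id": (0.8, 0.2, lambda v: "<redacted>"),
    "timestamp":  (0.5, 0.4, lambda v: v[:13] + ":00"),   # truncate to the hour
}

def sanitize(record, keep_threshold=0.0):
    """Keep a value when utility minus sensitivity exceeds the threshold."""
    out = {}
    for field, value in record.items():
        sensitivity, utility, generalize = POLICY[field]
        out[field] = value if utility - sensitivity > keep_threshold else generalize(value)
    return out

usage = {"app_name": "maps", "gps": "40.1106,-88.2073",
         "contact_id": "c-4821", "timestamp": "2016-03-01T16:42:10"}
print(sanitize(usage))
```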

Differential Privacy, Entropy and Security in Distributed Control of Cyber Physical Systems  slides | video
Zhenqi Huang, Research Assistant, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Yu Wang, Research Assistant, Mechanical Science and Engineering, University of Illinois at Urbana-Champaign
April 26, 2016, 4:00 p.m., B02 Coordinated Science Lab

Abstract: The concept of differential privacy stems from the study of private queries of datasets. In this work, we apply this concept to discrete-time, linear distributed control systems in which agents need to maintain the privacy of certain preferences while sharing information for better system-level performance. The system has N agents operating in a shared environment that couples their dynamics. We show that for stable systems the performance cost grows as O(T³/(Nε²)), where T is the time horizon and ε is the differential privacy parameter. Next, we study lower bounds in terms of the Shannon entropy of the minimal mean square estimate of the system’s private initial state from noisy communications between an agent and the server. We show that for any noise-adding differentially private mechanism, the Shannon entropy is at least nN(1 − ln(ε/2)), where n is the dimension of the system; the lower bound is achieved by a Laplace-noise-adding mechanism. Finally, we study the problem of keeping the objective functions of individual agents differentially private in the context of cloud-based distributed optimization. The result shows a trade-off between the privacy of objective functions and the performance of the distributed optimization algorithm with noise.
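For intuition about the privacy/performance trade-off, the sketch below adds Laplace noise, scaled as sensitivity/ε, to the values agents share in a toy consensus-style loop. The dynamics, cost measure, and numbers are invented and serve only to illustrate why smaller ε (stronger privacy) degrades coordination.

```python
# Minimal sketch of a noise-adding mechanism: each agent perturbs the state it
# shares with Laplace noise whose scale grows as sensitivity/epsilon, so a
# smaller epsilon means noisier coordination and a larger cost. The dynamics
# and cost below are invented for illustration only.
import numpy as np

def private_report(state, sensitivity, epsilon, rng):
    """Laplace mechanism applied to the values the agents broadcast."""
    return state + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=state.shape)

def run(n_agents=10, horizon=50, epsilon=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_agents)                # private initial preferences
    cost = 0.0
    for _ in range(horizon):
        reported = private_report(x, sensitivity=1.0, epsilon=epsilon, rng=rng)
        mean = reported.mean()                   # the server aggregates noisy reports
        x = x + 0.1 * (mean - x)                 # each agent tracks the noisy average
        cost += np.sum((x - mean) ** 2)          # disagreement accumulated over time
    return cost

for eps in (0.1, 1.0, 10.0):
    print(eps, run(epsilon=eps))
```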

Preemptive Intrusion Detection – Practical Experience and Detection Framework  slides | video
Phuong Cao, Research Assistant, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
May 3, 2016, 4:00 p.m., B02 Coordinated Science Lab

Abstract: Using stolen or weak credentials to bypass authentication is one of the top 10 network threats, as shown in recent studies. Masquerading as legitimate users, attackers use stealthy techniques such as rootkits and covert channels to gain persistent access to a target system. However, such attacks are often detected only after the system misuse stage, i.e., after the attackers have already executed attack payloads such as: i) stealing secrets, ii) tampering with system services, and iii) disrupting the availability of production services.

In this talk, we analyze a real-world credential stealing attack observed at the National Center for Supercomputing Applications. We show the disadvantages of traditional detection techniques, such as signature-based and anomaly-based detection, for such attacks. Our approach complements existing detection techniques. We investigate the use of probabilistic graphical models, specifically Factor Graphs, to integrate security logs from multiple sources for more accurate detection. Finally, we propose a security testbed architecture to: i) simulate variants of known attacks that may happen in the future, ii) replay such attack variants in an isolated environment, and iii) collect and share security logs of such replays with the security research community.
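As a toy illustration of the factor-graph idea (not the model from the talk), the sketch below treats the unnormalized belief that an attack is underway as a product of per-observation factors, each tied to an event from a different log source. The events and factor weights are invented.

```python
# Toy sketch of factor-graph-style fusion: the unnormalized belief that an
# attack is underway is a product of per-observation factors, each tied to an
# event from a different log source. Events and weights below are invented.

# factor(attack_hypothesis) -> non-negative score for one observed event
FACTORS = {
    "login_from_new_country":  lambda attack: 4.0 if attack else 1.0,
    "download_of_build_tools": lambda attack: 3.0 if attack else 1.0,
    "new_system_service":      lambda attack: 5.0 if attack else 1.0,
    "normal_cron_run":         lambda attack: 1.0 if attack else 2.0,
}

def attack_probability(observed_events, prior=0.01):
    """Combine the factors for both hypotheses, then normalize."""
    score = {True: prior, False: 1.0 - prior}
    for hypothesis in (True, False):
        for event in observed_events:
            score[hypothesis] *= FACTORS[event](hypothesis)
    return score[True] / (score[True] + score[False])

print(attack_probability(["login_from_new_country"]))
print(attack_probability(["login_from_new_country",
                          "download_of_build_tools",
                          "new_system_service"]))
```

The point of the fusion is that no single event is damning on its own, but their combination pushes the belief high enough to justify preemptive action before the misuse stage.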