- Posted on September 16, 2015 at 11:16 am by email@example.com.
- Categorized Events.
User Expectations in Mobile App Security slides | video
Tao Xie, Associate Professor, Computer Science, University of Illinois at Urbana-Champaign
January 26, 2016, 4:00 p.m., B02 Coordinated Science Lab
Abstract: Maintaining the security and privacy hygiene of mobile apps is a critical challenge. Unfortunately, no program analysis algorithm can determine that an application is “secure” or “malware-free.” For example, if an application records audio during a phone call, it may be malware. However, the user may want to use such an application to record phone calls for archival and benign purposes. A key challenge for automated program analysis tools is determining whether or not that behavior is actually desired by the user (i.e., user expectation). This talk presents recent research progress in exploring user expectations in mobile app security.
Towards Preserving Mobile Users’ Privacy in the Context of Utility Apps
Wing Lam, Research Assistant, Computer Science, University of Illinois at Urbana-Champaign
March 1, 2016, 4:00 p.m., B02 Coordinated Science Lab
Abstract: A variety of valuable mobile utility apps rely heavily on collecting a user’s app-usage data to deliver their promised functionality and enhance the user experience. Part of such app-usage data often contains security-sensitive information, raising an important and challenging question: how to balance the user’s privacy against the utility app’s functionality. To address this question, we propose a new privacy framework that combines runtime sensitive-information detection, utility-impact analysis, privacy-policy compliance checking, and balanced data anonymization to enable a third-party app to determine which original values to keep in sanitized data in order to deliver a desirable level of utility efficacy.
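The privacy-versus-utility balancing act can be illustrated with a minimal sketch (the field names, patterns, and the idea of a declared "keep" set are hypothetical illustrations, not details from the talk): sensitive-looking values are masked unless the utility app has declared that it needs the field.

```python
import re

# Hypothetical sketch of balanced anonymization: mask sensitive values in
# string-valued app-usage records, except in fields the utility app needs.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like tokens
    re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),  # email addresses
]

def sanitize(record, keep_fields):
    """Return a copy of `record` with sensitive tokens redacted,
    leaving fields in `keep_fields` untouched for utility efficacy."""
    out = {}
    for field, value in record.items():
        if field in keep_fields:
            out[field] = value  # kept as-is: needed by the utility app
        else:
            masked = value
            for pat in SENSITIVE_PATTERNS:
                masked = pat.sub("<redacted>", masked)
            out[field] = masked
    return out
```

A real framework would also measure how much each redaction hurts the app's utility before deciding what to keep; this sketch only shows the keep-versus-mask decision.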
Differential Privacy, Entropy and Security in Distributed Control of Cyber Physical Systems slides | video
Zhenqi Huang, Research Assistant, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Yu Wang, Research Assistant, Mechanical Science and Engineering, University of Illinois at Urbana-Champaign
April 26, 2016, 4:00 p.m., B02 Coordinated Science Lab
Abstract: The concept of differential privacy stems from the study of private queries on datasets. In this work, we apply this concept to discrete-time, linear distributed control systems in which agents need to keep certain preferences private while sharing information for better system-level performance. The system has N agents operating in a shared environment that couples their dynamics. We show that for stable systems the performance cost grows as O(T³/(Nε²)), where T is the time horizon and ε is the differential-privacy parameter. Next, we study lower bounds in terms of the Shannon entropy of the minimal mean-square estimate of the system’s private initial state from noisy communications between an agent and the server. We show that for any noise-adding differentially private mechanism, the Shannon entropy is at least nN(1 − ln(ε/2)), where n is the dimension of the system, and that this lower bound is achieved by a Laplace-noise-adding mechanism. Finally, we study the problem of keeping the objective functions of individual agents differentially private in the context of cloud-based distributed optimization. The result shows a trade-off between the privacy of the objective functions and the performance of the noisy distributed optimization algorithm.
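The Laplace-noise-adding mechanism mentioned above is the classic building block of differential privacy: adding Laplace noise with scale sensitivity/ε to a query answer yields ε-differential privacy. A minimal sketch (the function names and the sum query are illustrative choices, not from the talk):

```python
import random

def laplace_noise(scale, rng=random):
    # Sample Laplace(0, scale) as an exponential with a random sign.
    sign = 1 if rng.random() < 0.5 else -1
    return sign * rng.expovariate(1.0 / scale)

def private_sum(values, epsilon, sensitivity=1.0):
    # Laplace mechanism: noise with scale sensitivity/epsilon gives
    # epsilon-differential privacy for a sum query over bounded entries.
    return sum(values) + laplace_noise(sensitivity / epsilon)
```

Smaller ε means stronger privacy but larger noise, which is the same privacy/performance trade-off the abstract describes for distributed control.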
Preemptive Intrusion Detection – Practical Experience and Detection Framework slides | video
Phuong Cao, Research Assistant, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
May 3, 2016, 4:00 p.m., B02 Coordinated Science Lab
Abstract: Using stolen or weak credentials to bypass authentication is one of the top ten network threats, as shown in recent studies. Disguised as legitimate users, attackers use stealthy techniques such as rootkits and covert channels to gain persistent access to a target system. However, such attacks are often detected only after the system-misuse stage, i.e., after the attackers have already executed attack payloads such as: i) stealing secrets, ii) tampering with system services, and iii) disrupting the availability of production services.
In this talk, we analyze a real-world credential-stealing attack observed at the National Center for Supercomputing Applications. We show the disadvantages of traditional detection techniques, such as signature-based and anomaly-based detection, for such attacks. Our approach complements existing detection techniques: we investigate the use of probabilistic graphical models, specifically factor graphs, to integrate security logs from multiple sources for more accurate detection. Finally, we propose a security testbed architecture to: i) simulate variants of known attacks that may happen in the future, ii) replay such attack variants in an isolated environment, and iii) collect and share security logs of such replays with the security research community.
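As a rough illustration of factor-graph-style scoring (the event names and factor weights below are invented for illustration; the model in the talk is far richer), per-event factors from different log sources can be combined into a posterior over a hidden benign-versus-attack state:

```python
# Hypothetical factor table: each observed log event contributes one
# factor coupling that event to the hidden state of the session.
STATES = ["benign", "attack"]
FACTORS = {
    "login_unusual_ip":  {"benign": 0.4, "attack": 0.6},
    "new_ssh_key_added": {"benign": 0.3, "attack": 0.7},
    "normal_job_submit": {"benign": 0.8, "attack": 0.2},
}
UNKNOWN = {"benign": 0.5, "attack": 0.5}  # uninformative factor

def posterior(events):
    """Exact inference by enumeration over the single hidden state:
    joint score = product of factors, then normalize."""
    scores = {}
    for s in STATES:
        score = 1.0
        for e in events:
            score *= FACTORS.get(e, UNKNOWN)[s]
        scores[s] = score
    z = sum(scores.values())
    return {s: scores[s] / z for s in STATES}
```

Because evidence accumulates multiplicatively, a sequence of individually weak indicators can still push the posterior toward "attack" before the misuse stage, which is the preemptive-detection idea.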