Invited Student Talks

The Organizing Committee of the 10th Annual CSL Student Conference is pleased to announce the following invited student talks.

Katie Driggs-Campbell

M.S./Ph.D. student, EECS, UC Berkeley

Human Inspired Modeling for Autonomous Vehicles: Utilizing Sensors, Machine Learning, and Control

Venue: CSL B02

Time: 11:50 – 12:15, Feb 26, 2015

Abstract

Recently, multiple car companies have announced that autonomous vehicles will be available to the public in the next few years.  While a great deal of progress has been made in autonomous systems, they often lack the flexibility and realism that safe drivers exhibit.  To address this, there has been an increased focus on human-inspired approaches that allow driver assistance and autonomous systems to predict or mimic human behavior.  We present driver modeling algorithms that identify likely driver behaviors and infer intent for semi- and fully autonomous frameworks.  By integrating relevant sensor data, we analyze how humans interpret and interact with the dynamic environment.  Using the estimated observable states of the surrounding vehicles, we analyze various maneuvers (e.g., lane changes) and risk measures, such as time-to-collision.  These models use various machine learning techniques to learn driver behavior from data collected in a motion simulator, which gathers data in a safe, realistic manner while allowing for real-time, active labeling and assessment.  The developed models have been shown to identify behaviors with extremely high accuracy.  Utilizing a hybrid system formulation, the resulting system can be minimally invasive and capture the flexibility and adaptability of adept drivers.
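
As a rough, hypothetical sketch of the kind of pipeline the abstract describes (not the speaker's actual models or data), the example below trains an off-the-shelf classifier to label maneuvers from a few hand-picked features such as time-to-collision; the feature names and the synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: classify lane-change vs. lane-keep maneuvers from
# simulator-logged features (e.g., time-to-collision). Not the speaker's code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for simulator data: each row is [time_to_collision, lateral_offset,
# relative_speed]; label 1 = lane change, 0 = lane keep. Real data would come
# from the motion simulator described in the abstract.
n = 500
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)  # one of many possible models
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```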

Speaker Bio

Katie Driggs-Campbell was born and raised in Phoenix, Arizona, and attended Arizona State University, graduating with honors in 2012 with a B.S.E. in Electrical Engineering. Under the guidance of Professor Ruzena Bajcsy, her current research focuses on developing testbeds and control algorithms for robotic systems that safely interact with humans in everyday life. Specifically, she considers the interaction between drivers and autonomous vehicles by developing driver models and analyzing networks of heterogeneous vehicles.  Outside of work, she enjoys fun facts and being involved in the EE Graduate Student Association and the Women in Computer Science and Electrical Engineering organization.


Christos Thrampoulidis

PhD student, Electrical Engineering at Caltech

Comparison lemmas and convexity: towards a precise performance analysis of non-smooth optimization

Venue: CSL B02

Time: 16:20 – 16:45, Feb 26, 2015

Abstract

The typical scenario in most big-data problems is one where the ambient dimension of the signal is very large (e.g., high-resolution images, gene expression data from a DNA microarray, social network data), yet its desired properties lie in some low-dimensional structure (sparsity, low-rankness, clusters). Non-smooth convex optimization procedures have emerged as a powerful tool to reveal those structures. We consider a general class of such methods, which minimize a loss function measuring the misfit of the model to noisy, linear observations plus a “structure-inducing” regularization term (l1-norm, nuclear norm, mixed l1/l2-norm, etc.). Celebrated instances include the LASSO, the Group-LASSO, and the Least Absolute Deviations method. In this talk, we describe a quite general theory for determining precise performance guarantees (minimum number of measurements, mean-square error, etc.) of such methods under certain measurement ensembles (Gaussian, Haar, etc.). For illustration, we show results on the mean-squared error of the LASSO algorithm and make connections to ordinary least squares and to noiseless compressed sensing. The genesis of the framework can be traced back to a famous 1962 lemma of Slepian on comparing Gaussian processes and, more precisely, to a non-trivial extension proved by Gordon in 1988, known as the Gaussian min-max theorem, for which we provide a stronger version in the presence of additional convexity assumptions.

Joint work with Samet Oymak and Babak Hassibi.
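
For concreteness, the sketch below instantiates the class of estimators described in the abstract with a least-squares misfit and an l1 regularizer (a LASSO-type program) and solves it with an off-the-shelf convex solver; the problem sizes, noise level, and regularization weight are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of the regularized estimators discussed in the abstract:
#   minimize ||y - A x||_2 + lam * ||x||_1   (a LASSO-type instance).
# Problem sizes and the weight `lam` are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 10  # ambient dimension, measurements, sparsity

x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # sparse ground truth
A = rng.normal(size=(m, n)) / np.sqrt(m)                   # Gaussian measurements
y = A @ x0 + 0.05 * rng.normal(size=m)                     # noisy linear observations

x = cp.Variable(n)
lam = 0.1
cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm(x, 1))).solve()

print("normalized squared error:", np.sum((x.value - x0) ** 2) / np.sum(x0 ** 2))
```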

Speaker Bio

Christos Thrampoulidis was born in Veroia, Greece. He received his diploma in Electrical and Computer Engineering from the University of Patras, Greece, in 2011 and his Master's degree in Electrical Engineering from the California Institute of Technology, Pasadena, in 2012. He is currently a PhD candidate at Caltech. His research interests include the analysis of convex optimization algorithms, compressive sensing, statistical inference, and optimization in the smart grid. Thrampoulidis is a recipient of the 2014 Qualcomm Innovation Fellowship. He has also been awarded the Andreas Mentzelopoulos Scholarship and the I. Milias Award from the University of Patras.

Yan Michalevsky

PhD Student, Stanford University

Side-Channel Attacks on Mobile Devices

Venue: CSL B02
Time: 16:20 – 16:45, Feb 27, 2015

Abstract

Modern smartphones are loaded with sensors that measure a lot of information about the environment: a compass, an accelerometer, a GPS receiver, a microphone, an ampere-meter, etc. Some sensors, like the GPS receiver and microphone, are protected, as applications must request special permissions to read data from them. Other sensors, like the accelerometer and ampere-meter, are considered innocuous and can be read by any application without special permissions.

In a sequence of recent papers, we show that smartphone sensors can be abused: malicious applications can use innocuous sensors for unintended purposes. We give three illustrative examples: access to the accelerometer yields a device fingerprint that is strongly bound to the phone; access to the gyro sensor enables an application without privileges to eavesdrop on acoustic signals, including speech, in the vicinity of the phone; and access to the ampere-meter reveals information about the phone’s past and present locations.

We suggest defenses specific to these particular attacks, as well as more general principles for designing a more secure ecosystem of smart devices.
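
As a toy illustration of the fingerprinting example only (the real attacks are considerably more involved), the sketch below reduces logged accelerometer samples to a per-axis bias vector that tends to differ from device to device; the sensor traces here are simulated, not measurements from real phones.

```python
# Toy illustration of a sensor-based device fingerprint: summarize logged
# accelerometer readings (simulated here) into a per-axis bias vector.
# Real attacks are more sophisticated; this only conveys the idea.
import numpy as np

rng = np.random.default_rng(1)

def fingerprint(samples: np.ndarray) -> np.ndarray:
    """Per-axis mean offset of raw accelerometer samples (shape: n x 3)."""
    return samples.mean(axis=0)

# Two simulated devices with slightly different calibration offsets.
device_a = rng.normal(loc=[0.02, -0.01, 9.81], scale=0.05, size=(1000, 3))
device_b = rng.normal(loc=[-0.03, 0.04, 9.79], scale=0.05, size=(1000, 3))

print("device A fingerprint:", fingerprint(device_a).round(3))
print("device B fingerprint:", fingerprint(device_b).round(3))
```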

Speaker Bio

Yan is a PhD student at Stanford University, advised by Dan Boneh. His recent focus is on mobile security and privacy. His work on side-channel attacks on mobile devices has been presented at the USENIX and Black Hat security conferences. Previously, he held several positions in industry as a team manager, independent contractor, and software architect and developer, mostly in the fields of networks, embedded software, and security. He holds a BSc in Electrical Engineering from the Technion and an MS in Electrical Engineering from Stanford University.

Dr. Michael Tschantz

Researcher, International Computer Science Institute, Berkeley

Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination

Venue: CSL B02
Time: 16:45 – 17:10, Feb 27, 2015

Abstract

To partly address people’s concerns over web tracking, Google has created the Ad Settings webpage to provide information about, and some choice over, the profiles Google creates on users. We present AdFisher, an automated tool that explores how user behaviors, Google’s ads, and Ad Settings interact. AdFisher can run browser-based experiments and analyze data using machine learning and significance tests. Our tool uses a rigorous experimental design and statistical analysis to ensure the statistical soundness of our results. Using AdFisher, we find that Ad Settings was opaque about some features of a user’s profile, that it does provide some choice over ads, and that these choices can lead to seemingly discriminatory ads. In particular, we found that visiting webpages associated with substance abuse changed the ads shown but not the settings page. We also found that setting the gender to female resulted in fewer instances of an ad related to high-paying jobs than setting it to male. Our limited visibility into the ad ecosystem prevents us from assigning blame, but these results can form the basis for investigations by the companies themselves or by regulatory bodies.

This is joint work with Amit Datta and Anupam Datta.
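
To illustrate the significance-testing step the abstract mentions (this is not AdFisher itself), the sketch below runs a simple permutation test on how often a given ad appears for two groups of browser agents; the counts are fabricated placeholders, not experimental results.

```python
# Minimal permutation test, in the spirit of the statistical analysis the
# abstract describes: compare ad counts between two groups of browser agents.
# The counts below are fabricated placeholders, not experimental data.
import numpy as np

rng = np.random.default_rng(0)

group_a = np.array([3, 1, 0, 2, 4, 1, 2, 0, 3, 1])  # e.g., agents with setting A
group_b = np.array([0, 1, 0, 0, 2, 1, 0, 0, 1, 0])  # e.g., agents with setting B

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[: len(group_a)].mean() - perm[len(group_a):].mean()
    if abs(diff) >= abs(observed):
        count += 1

print("permutation p-value:", count / n_perm)
```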

Speaker Bio

Michael Carl Tschantz is a researcher at the International Computer Science Institute.  He uses models from artificial intelligence and statistics to solve problems in privacy and security.  His current research includes automating information flow experiments, circumventing censorship, and securing machine learning.  He has a Ph.D. in Computer Science from Carnegie Mellon University.  His dissertation formalized and operationalized what it means to use information for a purpose.