Illinois Students
The Best Student Talk Awards are given to Illinois students Cesar A. Uribe, Jia-Bin Huang, and Arjun Athreya!
Varun Badrinath Krishna
Graduate Student, CSL, ECE
Data-driven Anomaly Detection for Securing Smart Meter Communications in Power Grids
Venue: CSL B02
Time: 11:00-11:20, Feb 18th, Thursday
Abstract
Electricity theft is a billion-dollar problem faced by electric utilities around the world, and current measures are ineffective against cyber theft attacks that compromise the integrity of smart meter communications. My research with Prof. William H. Sanders is aimed at detecting such attacks using data-driven anomaly detection approaches that leverage concepts from machine learning and signal processing. I have evaluated two models that capture trends in electricity consumption, using a real dataset of 500 consumer smart meters; these trends are then used to distinguish anomalies from legitimate consumption patterns. The first model uses PCA to remove noise from the data and DBSCAN to identify the anomalies. The second is the ARIMA model, which produces a confidence interval for future readings that can be used to identify anomalous readings. The talk will describe these two approaches, both of which led to publications that won best paper awards at their respective conferences.
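For readers who want a concrete feel for the first approach, below is a minimal Python sketch of a PCA-denoising plus DBSCAN pipeline using scikit-learn. The data, number of components, and clustering parameters are illustrative placeholders, not the values used in the actual study.

```python
# Illustrative sketch of the first approach (PCA denoising + DBSCAN clustering),
# not the exact pipeline from the talk; data and parameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
readings = rng.gamma(shape=2.0, scale=1.5, size=(500, 24))  # fake meters x hourly readings

X = StandardScaler().fit_transform(readings)

# Keep the leading principal components and reconstruct to suppress noise.
pca = PCA(n_components=5)
denoised = pca.inverse_transform(pca.fit_transform(X))

# DBSCAN labels points that fit no dense cluster with -1, i.e., potential anomalies.
labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(denoised)
print("flagged meters:", np.where(labels == -1)[0])
```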
Speaker Bio
Varun is a graduate student in the Electrical and Computer Engineering department and a research assistant in the Information Trust Institute at the University of Illinois at Urbana-Champaign. With Prof. William H. Sanders, he is researching data-driven methods to secure communications in smart grids. He is Co-PI on that project, which is partially supported by the Siebel Energy Institute and leverages the Blue Waters supercomputer at NCSA. His papers won best paper awards at QEST’15 and CRITIS’15. Varun graduated from the National University of Singapore and worked in Singapore as a Research Engineer for three years prior to joining the University of Illinois.
Cesar A. Uribe
Ph.D. candidate, CSL, ECE (Best Talk Award)
Fast Rates and Network Independence in Distributed Learning
Venue: CSL B02
Time: 11:20-11:40, Feb 18th, Thursday
Abstract
We consider the problem of distributed learning, where a group of agents repeatedly observe random processes and try to collectively agree on a hypothesis that best explains all the observations in the network. Agents interact over a time-varying sequence of directed graphs. We propose a distributed learning rule and establish a non-asymptotic, explicit, geometric, and network-independent convergence rate. Additionally, for fixed undirected graphs, we provide an improved learning protocol that scales better with the number of nodes in the network.
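As a rough illustration of the setting, the sketch below implements one common family of non-Bayesian distributed learning rules on a toy problem: each agent geometrically averages its neighbors' beliefs (in log space) and then applies a local Bayesian update with its own observation. It is a simplified stand-in, not the exact rule or rate analysis presented in the talk.

```python
# Toy sketch of a consensus + local-Bayesian-update learning rule on a fixed graph.
# It illustrates the general family of distributed learning rules; the exact
# protocol and convergence analysis in the talk may differ.
import numpy as np

rng = np.random.default_rng(1)
hypotheses = [0.3, 0.5, 0.7]       # candidate Bernoulli means
true_theta = 0.7
n_agents, n_hyp = 4, len(hypotheses)

# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

log_beliefs = np.zeros((n_agents, n_hyp))    # uniform priors (in log space)
for t in range(200):
    obs = rng.binomial(1, true_theta, size=n_agents)
    log_lik = np.array([[o * np.log(h) + (1 - o) * np.log(1 - h)
                         for h in hypotheses] for o in obs])
    # Geometric averaging of neighbors' beliefs, then a local Bayesian update.
    log_beliefs = W @ log_beliefs + log_lik

beliefs = np.exp(log_beliefs - log_beliefs.max(axis=1, keepdims=True))
beliefs /= beliefs.sum(axis=1, keepdims=True)
print("each agent's most likely hypothesis:",
      [hypotheses[i] for i in beliefs.argmax(axis=1)])
```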
Speaker Bio
César A. Uribe is a Ph.D. student in CSL under the supervision of Prof. Angelia Nedich and Prof. Alex Olshevsky. He received an Engineer diploma (with Honors) in Electronic Engineering from the University of Antioquia, Colombia, in 2010 and an M.Sc. in Systems and Control (Cum Laude) from the Delft University of Technology, the Netherlands, in 2013. His main research interest is the study of distributed optimization algorithms for learning and control.
Giulia Cecilia Fanti
Postdoc, CSL, ECE
Anonymous Message Spreading in the Presence of Spies
Venue: CSL B02
Time: 11:40-12:00, Feb 18th, Thursday
Abstract
Anonymous messaging platforms like Whisper and Yik Yak allow users to spread messages over a network (e.g., a social network) without revealing the message author to other users. In these platforms, content is spread symmetrically over the contact network. This so-called “diffusion spreading” leads to author deanonymization by adversaries with access to metadata, such as timing information. In this work, we ask how to spread a message so that an adversary with metadata access cannot infer the source. In particular, we prove that a recently-proposed spreading mechanism called adaptive diffusion achieves asymptotically optimal source-hiding against such an adversary, and significantly outperforms traditional diffusion. This is surprising because adaptive diffusion was designed for a different adversarial model. We analytically characterize the anonymity properties of adaptive diffusion over trees and demonstrate empirically that these properties hold over real social graphs.
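To see why symmetric spreading leaks the source, the toy simulation below runs plain diffusion on a tree and lets a hypothetical adversary guess the source as the center of the infected subgraph; the guess lands close to the true source. Adaptive diffusion, the protocol analyzed in the talk, is designed to defeat exactly this kind of estimator and is not implemented here.

```python
# Minimal simulation of symmetric ("diffusion") spreading on a tree, illustrating
# why it leaks the source: the true source tends to sit near the center of the
# infected set, which a metadata adversary can exploit.
import random
import networkx as nx

random.seed(0)
G = nx.balanced_tree(2, 7)              # a binary tree as the contact network
source = 0

infected = {source}
frontier = {source}
for _ in range(6):                      # a few rounds of symmetric spreading
    new = set()
    for u in frontier:
        for v in G.neighbors(u):
            if v not in infected and random.random() < 0.7:
                new.add(v)
    infected |= new
    frontier = new

# A simple adversarial estimate: the center of the infected subgraph.
sub = G.subgraph(infected)
estimate = nx.center(sub)[0]
print("true source:", source, "| adversary's estimate:", estimate,
      "| distance:", nx.shortest_path_length(G, source, estimate))
```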
Speaker Bio
Giulia Fanti is a postdoctoral researcher at the University of Illinois at Urbana-Champaign, studying privacy-preserving technologies under Professor Pramod Viswanath. She previously obtained her Ph.D. and M.S. in EECS from U.C. Berkeley under Professor Kannan Ramchandran, and her B.S. in ECE from Olin College of Engineering in 2010. She is a recipient of the National Science Foundation Graduate Research Fellowship, as well as a Best Paper Award at ACM SIGMETRICS 2015 for her work on anonymous rumor spreading, in collaboration with Peter Kairouz, Professor Sewoong Oh and Professor Pramod Viswanath of the University of Illinois at Urbana-Champaign.
Aadeel Akhtar
Ph.D. candidate, Neuroscience Program
The Future of Upper Limb Prosthetics
Venue: CSL B02
Time: 14:30-14:50, Feb 18th, Thursday
Abstract
According to the WHO, roughly 80% of amputees live in low-income countries, while less than 3% of that population has access to appropriate rehabilitative care. In the first part of the talk, we will discuss our efforts in developing a highly-functional, low-cost, 3D-printed hand controlled by residual muscles for patients with transradial amputations, and our work with the Range of Motion Project in testing our device in Ecuador. The second part of the talk will focus on progress and challenges in incorporating proprioceptive and tactile sensory feedback into prosthetic hands.
Speaker Bio
Aadeel is an M.D./Ph.D. candidate in the Neuroscience program at the University of Illinois at Urbana-Champaign. He is a member of the Bretl Research Group and currently holds an NIH National Research Service Award MD/PhD Fellowship. Aadeel received his B.S. in Biology in 2007 and M.S. in Computer Science in 2008 from Loyola University Chicago. His research interests include motor control and sensory feedback for upper limb prosthetic devices, and he has established collaborations with the Rehabilitation Institute of Chicago, the John Rogers Research Group at Illinois, and the Range of Motion Project in Guatemala and Ecuador. He is also the Co-Founder and CEO of PSYONIC, a startup whose mission is to develop advanced, neurally-controlled prosthetic hands—the first with sensory feedback—at a tenth the cost of state-of-the-art commercially available prostheses, for those who need them around the world.
Shripad Gade
Ph.D. candidate, Aerospace
Robotic Herding of Bird Flocks Using UAVs
Venue: CSL B02
Time: 15:20-15:40, Feb 18th, Thursday
Abstract
In this work we present a strategy for diverting a flock of birds away from an airport using a robotic adversary, referred to as a pursuer (such as a robotic falcon). The objectives are to prevent the birds from entering a specified volume of space and to push the flock towards a desired herding goal. We propose the n-Wavefront algorithm, which enables a single UAV to herd a flock of birds to a desired area. We analyze the performance and stability characteristics of the herding strategy using tools from linear and nonlinear stability theory, with the aim of characterizing its performance and identifying the permissible and optimum values of the control parameters. We show, via simulation and theory, that a pursuer using the n-Wavefront algorithm can successfully maneuver the birds around the prescribed perimeter while ensuring that the swarm does not fragment in response to the pursuer.
Speaker Bio
Shripad Gade received a B.Tech. and an M.Tech. in Aerospace Engineering from the Indian Institute of Technology Bombay, India. He is currently a graduate student in the Aerospace Engineering department at the University of Illinois at Urbana-Champaign. His research interests include dynamics and control of multi-agent systems, distributed optimization, and networked control.
Alireza Ramezani
Postdoc, CSL, Aerospace
Bat Bot (B2), A Biologically Inspired Flying Machine
Venue: CSL B02
Time: 15:40-16:00, Feb 18th, Thursday
Abstract
It is challenging to analyze the aerial locomotion of bats because of the complicated and intricate relationship between their morphology and flight capabilities. Developing a biologically inspired bat robot would yield insight into how bats control their body attitude and position through the complex interaction of nonlinear forces (e.g., aerodynamic) and their intricate musculoskeletal mechanism. The current work introduces a biologically inspired soft robot called Bat Bot (B2). The overall system is a flapping machine with 5 Degrees of Actuation (DoA). B2 has a nontrivial morphology that was designed after examining several biological bats. Key DoAs that contribute significantly to bat flight were selected and incorporated into B2’s flight mechanism design: 1) forelimb flapping motion, 2) forelimb mediolateral motion (folding and unfolding), and 3) hindlimb dorsoventral motion (upward and downward movement).
Speaker Bio
Alireza began his undergraduate studies in mechanical engineering in 2002 at the Iran University of Science and Technology (IUST), Tehran, Iran, and continued in 2006 at the Swiss Federal Institute of Technology (ETH) Zurich, Switzerland, earning an ETH diploma degree in mechanical engineering. In 2010, he joined the Mechanical Engineering Department at the University of Michigan, Ann Arbor, as a Ph.D. student with Prof. Jessy Grizzle as his academic adviser. In 2014, Alireza joined CSL at UIUC, where, in collaboration with Prof. Seth Hutchinson and Prof. Soon-Jo Chung, he is developing soft bio-inspired robots with bat morphology.
Jia-Bin Huang
Ph.D. candidate, ECE (Best Talk Award)
Single Image Super-Resolution from Transformed Self-Exemplars
Venue: CSL B02
Time: 11:00-11:20, Feb 19th, Friday
Abstract
Image super-resolution (SR) aims to recover missing high-spatial-frequency details from a low-resolution observation. Most modern single-image super-resolution methods rely on machine learning techniques to learn the relationship between low-resolution (LR) and high-resolution (HR) image patches. A popular class of such algorithms uses an external database of natural images as a source of LR-HR training patch pairs. However, large training sets are required to learn a sufficiently expressive LR-HR dictionary, and performance is often limited by the complexity of the patch space. In this talk, I will present a self-similarity-based SR method that overcomes this drawback. The core idea is to expand the internal patch search space to better accommodate local shape variations. I will show that even without using any external training databases, our method achieves significantly superior results on urban scenes while maintaining performance comparable to other state-of-the-art SR algorithms on natural scenes.
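The sketch below illustrates the "internal" (self-exemplar) idea in its most stripped-down form: a patch of the image is matched against a downscaled copy of the same image rather than an external database. The actual method additionally searches over geometric transformations of the patches, which this toy code omits.

```python
# Highly simplified sketch of internal example-based SR: for each query patch,
# search a downscaled copy of the same image for its nearest patch (and, in the
# full method, borrow the corresponding higher-resolution detail). No geometric
# transformations are searched here, and the data is a random stand-in image.
import numpy as np
from scipy.ndimage import zoom

def best_internal_match(patch, search_img, size):
    """Exhaustive nearest-patch search inside the same image (no transforms)."""
    best, best_pos = np.inf, (0, 0)
    H, W = search_img.shape
    for i in range(H - size):
        for j in range(W - size):
            cand = search_img[i:i + size, j:j + size]
            d = np.sum((cand - patch) ** 2)
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos

rng = np.random.default_rng(0)
lr = rng.random((32, 32))            # stand-in low-resolution image
coarse = zoom(lr, 0.5, order=1)      # downscaled copy used as the internal "database"
patch = lr[8:13, 8:13]               # a 5x5 query patch
print("best internal match at:", best_internal_match(patch, coarse, 5))
```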
Speaker Bio
Jia-Bin Huang is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Prof. Narendra Ahuja. His research interests include computer vision, computer graphics, and machine learning, with a focus on visual analysis and synthesis with physically grounded constraints. His research received the Best Student Paper Award at the International Conference on Pattern Recognition in 2012 for work on computational modeling of visual saliency, and the Best Paper Award at the ACM Symposium on Eye Tracking Research and Applications in 2014 for work on learning-based eye tracking. Huang is the recipient of the UIUC Dissertation Completion Fellowship, the Thomas and Margaret Huang Award, the Sundaram Seshu Fellowship, and the PURE Best Research Mentor Award.
Pooya Khorrami
Ph.D. candidate, CSL, ECE
Do Deep Neural Networks Learn Facial Action Units When Doing Expression Recognition?
Venue: CSL B02
Time: 11:20-11:40, Feb 19th, Friday
Abstract
Although convolutional neural networks (CNNs) have been the appearance-based classifier of choice in recent years, relatively few works have examined how much they can improve performance on accepted expression recognition benchmarks and, more importantly, what they actually learn. In this work, we not only show that CNNs can achieve strong performance, but also introduce an approach to decipher which portions of the face influence the CNN’s predictions. First, we train a zero-bias CNN on facial expression data and achieve, to our knowledge, state-of-the-art performance on two expression recognition benchmarks: the extended Cohn-Kanade (CK+) dataset and the Toronto Face Dataset (TFD). We then qualitatively analyze the network by visualizing the spatial patterns that maximally excite different neurons in the convolutional layers and show how they resemble Facial Action Units (FAUs). Finally, we use the FAU labels provided in the CK+ dataset to verify that the FAUs observed in our filter visualizations indeed align with the subjects’ facial movements.
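For the architectural flavor, here is a minimal zero-bias CNN in PyTorch: the convolutional and fully connected layers are simply constructed with bias=False. The layer sizes, activation choice, and input resolution are placeholders rather than the configuration used in the talk.

```python
# A minimal "zero-bias" CNN sketch: layers are created with bias=False, so
# activations are driven purely by the learned filters. Structural illustration
# only; the architecture and training details in the talk differ.
import torch
import torch.nn as nn

class ZeroBiasCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, bias=False), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, bias=False), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 21 * 21, n_classes, bias=False)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ZeroBiasCNN()
faces = torch.randn(4, 1, 96, 96)    # stand-in grayscale face crops
print(model(faces).shape)            # torch.Size([4, 8])
```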
Speaker Bio
Pooya Khorrami is pursuing his Ph.D. in Electrical and Computer Engineering (ECE) under Professor Thomas S. Huang in the Image Formation and Processing (IFP) group at the University of Illinois at Urbana-Champaign. He received his B.S. in Electrical and Computer Engineering from Carnegie Mellon University in 2011 and his M.S. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2013. He is a recipient of the James N. Henderson Fellowship awarded by the ECE department in 2012. His research interests include facial expression recognition, deep learning, and video surveillance.
Minje Kim
Ph.D. candidate, Computer Science
Bitwise Neural Networks for Source Separation
Venue: CSL B02
Time: 11:40-12:00, Feb 19th, Friday
Abstract
In the proposed Bitwise Neural Networks (BNNs), all of the input, hidden, and output nodes are binary (+1 and -1), and so are all of the weights and biases. BNNs are spatially and computationally efficient to implement, since (a) a real-valued sample or parameter is represented by a single bit, and (b) multiplication and addition correspond to bitwise XNOR and bit-counting, respectively. Therefore, BNNs can be used to implement a deep learning system in a resource-constrained environment, so that it can be deployed on small devices without using up power, memory, CPU cycles, etc. The training procedure for BNNs is based on a straightforward extension of backpropagation. BNNs show comparable classification accuracy on the MNIST handwritten digit recognition task. I will also show that a bitwise denoising autoencoder can be trained to produce a cleaned-up speech spectrum from a noisy input speech spectrum.
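The bitwise arithmetic is easy to demonstrate: if ±1 values are packed into integers, the dot product of an input with a weight vector reduces to XNOR followed by a bit count. The toy neuron below shows that identity; the training procedure (the backpropagation extension) is not shown.

```python
# Sketch of a single bitwise neuron: inputs and weights are ±1 vectors packed into
# Python integers, multiplication becomes XNOR, and accumulation becomes a
# population count.
N = 8                                    # fan-in of the neuron

def pack(bits):
    """Pack a list of +1/-1 values into an integer bitmask (+1 -> 1, -1 -> 0)."""
    mask = 0
    for i, b in enumerate(bits):
        if b == 1:
            mask |= 1 << i
    return mask

def bitwise_neuron(x_mask, w_mask):
    """XNOR + popcount, then a sign: equivalent to sign(sum_i x_i * w_i)."""
    xnor = ~(x_mask ^ w_mask) & ((1 << N) - 1)
    matches = bin(xnor).count("1")       # number of agreeing positions
    return 1 if 2 * matches - N >= 0 else -1

x = pack([1, -1, 1, 1, -1, -1, 1, -1])
w = pack([1, -1, -1, 1, 1, -1, 1, 1])
print(bitwise_neuron(x, w))              # same sign as the ±1 dot product
```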
Speaker Bio
Minje Kim is a Ph.D. candidate in the Department of Computer Science at UIUC. His research focuses on developing machine learning algorithms for audio processing, emphasizing computational efficiency in resource-constrained environments and in applications involving large, unorganized datasets. He received the Richard T. Cheng Endowed Fellowship from UIUC in 2011, and his ICASSP papers were honored as outstanding student papers by Google and Starkey grants in 2013 and 2014, respectively. During his Ph.D. studies, he interned at the Creative Technologies Lab at Adobe four times from 2012 to 2015. He worked as a researcher at ETRI, a national lab in Korea, from 2006 to 2011.
Doris Xin
Ph.D. candidate, Computer Science
A Multi-Armed Bandit Approach for Batch Mode Active Learning on Information Networks
Venue: CSL B02
Time: 12:00-12:20, Feb 19th, Friday
Abstract
We propose an adaptable batch-mode active learning algorithm for classification on information networks. The algorithm takes advantage of the type information in heterogeneous information networks to generalize to a wide range of tasks. A correspondence between active learning (AL) and the multi-armed bandit (MAB) problem is established in order to enable the application of a combinatorial MAB algorithm for near-optimal query batch selection. The algorithm combines simple AL strategies based on centrality indices using a novel measure of expected error reduction on information networks. We demonstrate the effectiveness and adaptability of the algorithm by evaluating its performance on different classification tasks over the same network, for several real-world information networks.
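A toy sketch of the bandit view: treat each simple AL strategy as an arm and let a UCB rule decide which strategy proposes the next query batch, rewarding it by the error reduction it achieved. The rewards below are simulated, and the talk's algorithm uses a combinatorial MAB formulation rather than plain UCB1, so this is only an intuition-building example.

```python
# Toy sketch of casting batch-mode active learning as a multi-armed bandit: each
# "arm" is a simple query-selection strategy, and UCB1 decides which strategy
# proposes the next batch based on the reward (error reduction) it has produced.
import math
import random

random.seed(0)
strategies = ["degree_centrality", "betweenness", "uncertainty"]
true_payoff = {"degree_centrality": 0.3, "betweenness": 0.5, "uncertainty": 0.7}

counts = {s: 0 for s in strategies}
rewards = {s: 0.0 for s in strategies}

for t in range(1, 201):
    # UCB1 index: empirical mean + exploration bonus (unplayed arms go first).
    def ucb(s):
        if counts[s] == 0:
            return float("inf")
        return rewards[s] / counts[s] + math.sqrt(2 * math.log(t) / counts[s])

    chosen = max(strategies, key=ucb)
    # Stand-in for "query a batch with this strategy and measure error reduction".
    gain = random.gauss(true_payoff[chosen], 0.1)
    counts[chosen] += 1
    rewards[chosen] += gain

print("batches proposed per strategy:", counts)
```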
Speaker Bio
Doris Xin is currently a 2nd year PhD student in the Computer Science Department at UIUC advised by Professor Jiawei Han. She received her B.S. in Computer Science from the California Institute of Technology in 2012. She worked on ad response prediction and scalable machine learning infrastructure as a software engineer at LinkedIn from 2012 to 2014. She is a contributor to the Apache Spark project. Her current research interests are active learning, precision medicine via machine learning techniques, and declarative frameworks for learning and inference.
Yang Zhang
Ph.D. candidate, ECE
Probabilistic Acoustic Tube Model: A Probabilistic Generative Model for Speech
Venue: CSL B02
Time: 15:30-15:50, Feb 19th, Friday
Abstract
Speech modeling has a wide range of applications in speech processing, but current speech models are either partial or unstructured. The Probabilistic Acoustic Tube (PAT) model is a probabilistic generative model for speech that has potential advantages in many speech processing tasks. PAT is based on the source-filter model of speech as well as other speech coding theories, and inference is performed using Auxiliary Particle Filtering (APF) and Markov Chain Monte Carlo (MCMC). Experiments show the versatility and accuracy of the PAT model in decomposing speech into meaningful components.
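The source-filter idea that PAT builds on can be shown in a few lines: a periodic excitation (the "source") is passed through an all-pole vocal-tract filter. The filter coefficients below are arbitrary placeholders; PAT wraps this deterministic picture in a probabilistic generative model with APF/MCMC inference, which is not shown.

```python
# The source-filter view in its simplest deterministic form: a periodic glottal
# excitation passed through an all-pole vocal-tract filter. Placeholder values
# only; real LPC coefficients would be estimated from speech.
import numpy as np
from scipy.signal import lfilter

fs = 16000                       # sampling rate (Hz)
f0 = 120                         # pitch of the excitation (Hz)
n = fs // 10                     # 100 ms of signal

excitation = np.zeros(n)
excitation[::fs // f0] = 1.0     # impulse train at the pitch period

a = [1.0, -1.3, 0.8]             # illustrative stable all-pole filter
speech = lfilter([1.0], a, excitation)
print(speech[:5])
```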
Speaker Bio
Yang Zhang obtained his Bachelor’s degree from the Department of Electronic Engineering, Tsinghua University, China, and is currently a 4th-year graduate student in ECE at UIUC. His research interests include modeling the human speech system, covering both the acoustic system and prosody. He is also interested in probabilistic models and Bayesian frameworks with applications in signal processing and data mining.
Zhicheng Yan
Ph.D. candidate, Computer Science
HD-CNN: Hierarchical Deep Convolutional Neural Networks for Large Scale Visual Recognition
Venue: CSL B02
Time: 15:50-16:10, Feb 19th, Friday
Abstract
In image classification, visual separability between different object categories is highly uneven, and some categories are more difficult to distinguish than others. Such difficult categories demand more dedicated classifiers. However, existing deep convolutional neural networks (CNNs) are trained as flat N-way classifiers, and few efforts have been made to leverage the hierarchical structure of categories. In this work, we introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a two-level category hierarchy. An HD-CNN separates easy classes using a coarse category classifier while distinguishing difficult classes using fine category classifiers. During HD-CNN training, component-wise pretraining is followed by global fine-tuning with a multinomial logistic loss regularized by a coarse category consistency term. In addition, conditional execution of fine category classifiers and layer parameter compression make HD-CNNs scalable for large-scale visual recognition. We achieve state-of-the-art results on both the CIFAR-100 and the large-scale ImageNet 1000-class benchmark datasets. In our experiments, we build three different two-level HD-CNNs, which lower the top-1 error of the standard CNNs by 2.65%, 3.1%, and 1.1%.
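The test-time combination of coarse and fine predictions can be illustrated with random stand-ins for network outputs: the coarse classifier's probabilities weight the per-coarse-category fine classifiers' predictions. Conditional execution and parameter compression, which make the real system scalable, are omitted from this sketch.

```python
# Sketch of assembling an HD-CNN-style prediction at test time: a coarse
# classifier weights the predictions of per-coarse-category fine classifiers.
# Numbers are random stand-ins for network outputs.
import numpy as np

rng = np.random.default_rng(0)
n_coarse, n_fine = 3, 10

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

coarse_probs = softmax(rng.normal(size=n_coarse))            # p(coarse k | image)
fine_probs = np.stack([softmax(rng.normal(size=n_fine))      # p(class | image, k)
                       for _ in range(n_coarse)])

# Weighted average of fine predictions by coarse-category probabilities.
final = coarse_probs @ fine_probs
print("predicted class:", final.argmax(), "| probs sum to", final.sum().round(6))
```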
Speaker Bio
Zhicheng Yan is a Ph.D. candidate in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He is passionately interested in tailoring the architecture of deep neural networks to the needs of various intriguing problems in computer vision and graphics, and he focuses on leveraging deep neural network-based models to facilitate image understanding and manipulation. Before joining UIUC, he was a master’s student and research assistant in the State Key Lab of CAD&CG at Zhejiang University. He completed his Bachelor’s degree in Software Engineering at Zhejiang University in July 2007.
James Yifei Yang
Ph.D. candidate, CSL, ECE
Distributed Content Collection and Rank Aggregation
Venue: CSL B02
Time: 16:20-16:40, Feb 19th, Friday
Abstract
Despite the substantial literature on recommendation systems, there have been few studies in distributed settings, where peers provide recommendations locally. Motivated by word-of-mouth social behavior and the advantages of sharing resources, we analyze an online distributed recommendation system with joint content collection and rank aggregation. In such a system, peers contact each other and exchange partial preference information about items, which we take to be videos. With limited knowledge, peers use recommendation strategies to make decisions and collect items that are available from the contacted peers. The goal is to maximize the rate at which peers collect their most preferred items.
Correlated preferences are modeled either as rankings generated by a Plackett-Luce (PL) ranking model with a Zipf popularity distribution or as scores generated using an independent crossover (IC) model. We establish a performance upper bound and use intuition provided by the bound to design recommendation strategies with a range of complexity. Among these, the direct recommendation strategy emerges as being particularly simple and yet effective. In the context of the IC model and the direct recommendation strategy, we identify the fluid limit as the number of videos goes to infinity for a mean field limit derived for the number of peers going to infinity. Simulation results show that the limit analysis accurately predicts performance, not only for the IC model with scores, but also for the PL model with rankings.
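As an illustration of the preference model, the snippet below draws correlated rankings from a Plackett-Luce model whose item weights follow a Zipf law; items are selected one at a time with probability proportional to their remaining weight. The number of items and the Zipf exponent are illustrative choices.

```python
# Sketch of the preference model described above: Zipf item popularities and
# per-peer rankings sampled from a Plackett-Luce model with those popularities
# as weights. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_items, zipf_s = 10, 1.0

weights = 1.0 / np.arange(1, n_items + 1) ** zipf_s          # Zipf popularities
weights /= weights.sum()

def plackett_luce_ranking(w, rng):
    """Sample a full ranking: pick items one by one, proportional to remaining weight."""
    ranking = []
    remaining = list(range(len(w)))
    while remaining:
        p = w[remaining] / w[remaining].sum()
        pick = rng.choice(remaining, p=p)
        ranking.append(int(pick))
        remaining.remove(pick)
    return ranking

peers = [plackett_luce_ranking(weights, rng) for _ in range(3)]
for i, r in enumerate(peers):
    print(f"peer {i} ranking:", r)
```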
Speaker Bio
James Yang is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign supervised by Prof. Bruce Hajek. He received the Master’s degree from UIUC in 2013 supervised by Prof. Hajek and the Bachelor’s degree from the University of Waterloo in 2011, all in ECE. His research interests are in communication and networks, distributed algorithms, machine learning and recommendation systems.
Luke Wendt
Ph.D. candidate, CSL, ECE
Global Optimal Nonlinear Adaptive Control through Domain Linearization
Venue: CSL B02
Time: 16:40-17:00, Feb 19th, Friday
Abstract
This talk will show how under-actuated, under-measured, multivariate nonlinear dynamical systems defined on a finite domain can be made into computationally equivalent high-dimensional linear systems through a reinterpretation of the state. The transformation is fully generalizable and easily implemented in an automated way. By transforming any nonlinear system into its linear form, all the tools of optimal linear control theory and estimation can be applied to arbitrarily complex systems and reward functions. In particular, observable and unobservable subspaces can be easily isolated for analysis. The final solutions can then be expressed in a computationally efficient, low-dimensional, highly nonlinear form. This approach departs from traditional methods such as gain scheduling and feedback linearization, and it may even provide insight into the underlying biological processes that produce intelligence.
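One well-known way to trade nonlinearity for dimension, sketched below, is to discretize a finite domain into bins and represent a nonlinear map's action as a single linear transition operator on that high-dimensional state; the lifted linear iteration then tracks the nonlinear orbit to bin resolution. This is only meant to illustrate the general "lift the state, get a linear system" idea, and the construction presented in the talk may differ.

```python
# Lifting a scalar nonlinear map on a finite domain into a high-dimensional
# linear system by discretizing the domain into bins. Illustration only.
import numpy as np

f = np.cos                                # nonlinear map, studied on the domain [0, 1]
n_bins = 200
edges = np.linspace(0.0, 1.0, n_bins + 1)
centers = (edges[:-1] + edges[1:]) / 2

# Linear operator A: bin i maps to the bin containing f(center_i).
A = np.zeros((n_bins, n_bins))
for i, c in enumerate(centers):
    j = min(max(np.searchsorted(edges, f(c)) - 1, 0), n_bins - 1)
    A[j, i] = 1.0

# Iterating the lifted linear system tracks the nonlinear orbit to bin resolution.
p = np.zeros(n_bins)
p[10] = 1.0                               # point mass at the 11th bin's center
x = centers[10]
for _ in range(50):
    p = A @ p
    x = f(x)
print("lifted linear model:", round(float(centers[p.argmax()]), 3),
      "| direct nonlinear iteration:", round(float(x), 3))
```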
Speaker Bio
Luke received his B.S. in Physics and Electrical Engineering from Hope College. During this time he designed automated part inspection systems for Lakeshore Vision and Robotics and worked with NASA in the modeling, control, and construction of a reconfigurable tetrahedral rover. He is now pursuing a Ph.D. in Electrical and Computer Engineering at UIUC. He has had leadership roles with the NASA academies and FIRST Robotics, and has also taken on external research projects including contract work for Valve Software. He currently works with the Language Acquisition and Robotics Group at The Beckman Institute.
Arjun Prasanna Athreya
Ph.D. candidate, CSL, ECE (Best Talk Award)
Modeling the Impact of Epigenetics in Lung Cancer: A Game Theoretic Approach
Venue: CSL B02
Time: 17:00-17:20, Feb 19th, Friday
Abstract
Lung cancer is the leading cause of cancer deaths worldwide. It is well known that most lung cancer patients are smokers, yet not all smokers develop lung cancer. Today, the cancer community attributes this fact to possible epigenetic factors. Epigenetics is the study of chemical reactions that alter the functioning of the genome without introducing mutations. There are currently no mathematical models that capture the epigenetic impact of smoking on tumorigenesis in the lungs. In this work, we choose a game-theoretic approach to model one plausible biological phenomenon that is highly correlated with adenocarcinoma, one of the most prevalent types of lung cancer. We show that modeling the actual science provides better predictions than supervised approaches to predicting cancer using gene expression measures.
Speaker Bio
Arjun P. Athreya is a doctoral candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Prof. Ravishankar K. Iyer in the DEPEND laboratory at CSL. His research interests are in applying statistical methods to clinical and biological big data to bring predictability to clinical therapeutics in contexts such as cancer, complications in endocrinology, and major depressive disorders. Arjun is an NCSA CompGen fellow at the University of Illinois and collaborates with Prof. Derek Wildman (IGB) and Drs. Richard Weinshilboum, Liewei Wang, and Rani Kalari of the Mayo Clinic on this interdisciplinary research under the CompGen Initiative and the Mayo-Illinois Alliance. Prior to coming to Illinois, Arjun received his M.S. in ECE from Carnegie Mellon University, with interests in networks and system security.