Keynote Speakers

Abstract

Due to rapid advances in DNA sequencing technologies, in both speed and cost, the interpretation and use of genomic information is playing a significant role in many applications. Indeed, genomic information is being collected at great pace, depth, and breadth. Unprecedented access to such information calls for sophisticated algorithms to make sense of massive genomic data and to derive actionable solutions from heterogeneous data sources across different areas. In this talk, Dr. Utro will present algorithms his group has developed in three main areas, namely plant, population, and cancer genomics, with particular emphasis on the latter. He will also discuss his experience designing a tool for guiding cancer treatment and understanding disease.

Bio

Dr. Filippo Utro is a research scientist at the Computational Biology Center at the IBM T.J. Watson Research Center (NY, USA). Filippo joined IBM Research in 2011 as a post-doctoral researcher, after completing his PhD in Computer Science at the University of Palermo (Italy), and became a research scientist in 2014. His research involves computational methods for analyzing various types of biological data. Dr. Utro started his career at IBM as an investigator on the cacao genome project and has since continued to work on plant, population, and cancer genomics. His research interests include bioinformatics, algorithms and data structures, and genomic medicine. His latest research includes the use of k-mers to detect relevant features in genomic and epigenomic data, analyses of polyploid plants, and Watson for Genomics. Watson for Genomics looks for variations in the full human genome and uses Watson’s cognitive capabilities to examine data sources such as treatment guidelines, research, clinical studies, journal articles, and patient information.

Abstract

The late 20th century witnessed a major restructuring of the US electric power industry to facilitate the “competitive” procurement of electricity from generators competing to produce in open markets. In this lecture, we develop a mathematical framework to characterize the strategic behavior of generators in such markets under imperfect competition. In particular, we establish natural conditions under which Nash equilibria are guaranteed to exist for such markets, and derive a sharp upper bound on their price of anarchy (PoA). In addition to providing a structural characterization of a generator’s market power, the PoA bound we derive uncovers the possibility of a Braess-like paradox: an increase in the power flow capacity of certain transmission lines can result in a reduction of social welfare at Nash equilibrium. We close with a discussion of the practical implications of these results.
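For context, the price of anarchy referenced above is conventionally defined, in a welfare-maximization setting, as the ratio of the optimal social welfare to the welfare attained at the worst Nash equilibrium; a minimal sketch of this standard definition (not the specific bound derived in the talk) is

\[
\mathrm{PoA} \;=\; \frac{\displaystyle \max_{x \in \mathcal{X}} W(x)}{\displaystyle \min_{x \in \mathcal{NE}} W(x)} \;\geq\; 1,
\]

where W denotes social welfare, \mathcal{X} the set of feasible market outcomes, and \mathcal{NE} the set of Nash equilibrium outcomes.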

Bio

Eilyan Bitar is currently an Assistant Professor in the School of Electrical and Computer Engineering at Cornell University. Prior to joining Cornell in the fall of 2012, he spent the 2011-12 academic year as a Postdoctoral Fellow in the Department of Computing + Mathematical Sciences (CMS) at the California Institute of Technology and in Electrical Engineering and Computer Sciences at the University of California, Berkeley. A native Californian, he received both his Ph.D. (2011) and B.S. (2006) from the University of California, Berkeley. Professor Bitar’s research interests include modern power systems and electricity markets, stochastic control, and optimization.

Abstract

As transistors shrink to nanoscale dimensions, trapped electrons are making it difficult for digital computers to work. In contrast, the brain works fine with single-lane nanoscale devices that are intermittently blocked. Conjecturing that error-tolerance can be achieved by combining analog dendritic computation with digital axonal communication, neuromorphic engineers have created Neurogrid, the first neuromorphic system with billions of synaptic connections.

Bio

Prof. Kwabena Boahen received the B.S. and M.S.E. degrees in electrical and computer engineering from the Johns Hopkins University, Baltimore, MD, both in 1989 and the Ph.D. degree in computation and neural systems from the California Institute of Technology, Pasadena, in 1997. He was on the bioengineering faculty of the University of Pennsylvania from 1997 to 2005, where he held the first Skirkanich Term Junior Chair. He is presently a Professor in the Bioengineering Department of Stanford University, with a joint appointment in Electrical Engineering. He directs Stanford’s Brains in Silicon Laboratory, which develops silicon integrated circuits that emulate the way neurons compute, linking the seemingly disparate fields of electronics and computer science with neurobiology and medicine. He is an IEEE Fellow.

Abstract

A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation. For instance, in visual object recognition the nuisances include the unknown position, orientation, and scale of the object, while in speech recognition they include the unknown pronunciation, pitch, and speed of the voice. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks that routinely yields pattern recognition systems with near- or super-human capabilities. But a fundamental question remains: Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model: a generative probabilistic model that explicitly captures latent nuisance variation. By relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks and random decision forests, providing insights into their successes and shortcomings, a principled route to their improvement, and new avenues for exploration.
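To make the notion of latent nuisance variation concrete, a generic latent-nuisance generative model (a simplified illustrative sketch, not necessarily the exact Deep Rendering Model of the talk) treats an observation I as rendered jointly from a class label c and nuisance variables g, so that classification requires marginalizing, or max-marginalizing, over the unknown nuisance:

\[
p(I \mid c) \;=\; \sum_{g} p(I \mid c, g)\, p(g \mid c)
\;\approx\; \max_{g}\; p(I \mid c, g)\, p(g \mid c), \qquad
\hat{c} \;=\; \arg\max_{c}\; p(c)\, p(I \mid c).
\]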

Bio

Richard G. Baraniuk is the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University. His research interests lie in new theory, algorithms, and hardware for sensing, signal processing, and machine learning. He is a Fellow of the IEEE, AAAS, and the National Academy of Inventors and has received national young investigator awards from the US NSF and ONR, the Rosenbaum Fellowship from the Isaac Newton Institute at the University of Cambridge, the ECE Young Alumni Achievement Award from the University of Illinois, the Wavelet Pioneer and Compressive Sampling Pioneer Awards from SPIE, the IEEE Signal Processing Society Best Paper Award, and the IEEE Signal Processing Society Technical Achievement Award. His work on the Rice single-pixel compressive camera has been widely reported in the popular press and was selected by MIT Technology Review as a TR10 Top 10 Emerging Technology. For his teaching and education projects, including Connexions (cnx.org) and OpenStax (openstaxcollege.org), he has received the C. Holmes MacDonald National Outstanding Teaching Award from Eta Kappa Nu, the Tech Museum of Innovation Laureate Award, the Internet Pioneer Award from the Berkman Center for Internet and Society at Harvard Law School, the World Technology Award for Education, the IEEE-SPS Education Award, the WISE Education Award, and the IEEE James H. Mulligan, Jr. Medal for Education.