Session title: Recent advances in high-dimensional data
Organizer: Cunhui Zhang (Rutgers)
Chair: Sijian Wang (Rutgers)
Time: June 5th, 3:15pm – 4:45pm
Location: VEC 1202
Speech 1: The noise barrier and the large signal bias of the Lasso and other convex estimators
Speaker: Pierre Bellec (Rutgers)
Abstract: Convex estimators such as the Lasso, the matrix Lasso and the group Lasso have been studied extensively over the last two decades, demonstrating great success in both theory and practice. This paper introduces two quantities, the noise barrier and the large signal bias, that provide novel insights into the performance of these convex regularized estimators. In sparse linear regression, it is now well understood that the Lasso achieves fast prediction rates, provided that the correlations of the design satisfy a Restricted Eigenvalue or Compatibility condition, and that the tuning parameter is larger than some universal threshold. Using the two quantities introduced in the paper, we show that the compatibility condition on the design matrix is actually unavoidable for achieving fast prediction rates with the Lasso. In other words, the $\ell_1$-regularized Lasso must incur a loss due to the correlations of the design matrix, measured in terms of the compatibility constant. This result holds for any design matrix, any active subset of covariates, and any positive tuning parameter.
It is also well known that the Lasso enjoys a dimension reduction property: if the target vector is $s$-sparse, the prediction rate of the Lasso with tuning parameter $\lambda$ is of order $\lambda\sqrt{s}$, even if the ambient dimension $p$ is much larger than $s$. Such results require that the tuning parameter be greater than some universal threshold. We characterize sharp phase transitions for the tuning parameter of the Lasso around a critical threshold dependent on $s$. If $\lambda$ is equal to or larger than this critical threshold, the Lasso is minimax over $s$-sparse target vectors.
If $\lambda$ is equal to or smaller than this critical threshold, the Lasso incurs a loss of order $\sigma\sqrt{s}$ (which corresponds to a model of size $s$) even if the target vector is sparser than $s$. Remarkably, the lower bounds obtained in the paper also apply to random, data-driven tuning parameters. Additionally, the results extend to convex penalties beyond the Lasso.
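The role of the tuning parameter described above can be illustrated numerically. The following sketch (my own illustration, not code from the talk; the dimensions, signal strength, and threshold constant are arbitrary choices) fits the Lasso to an $s$-sparse model at tuning parameters below, at, and above a universal-threshold-scale value and reports the prediction error:

```python
# Illustrative sketch: Lasso prediction error on s-sparse data as the
# tuning parameter varies around a universal-threshold-scale value.
# All numeric choices here are arbitrary, for demonstration only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, sigma = 200, 500, 5, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 5.0                          # s-sparse target vector
y = X @ beta + sigma * rng.standard_normal(n)

# A universal threshold of order sigma * sqrt(2 log(p) / n)
lam_univ = sigma * np.sqrt(2 * np.log(p) / n)
for lam in [0.1 * lam_univ, lam_univ, 10 * lam_univ]:
    fit = Lasso(alpha=lam, max_iter=50_000).fit(X, y)
    pred_err = np.mean((X @ (fit.coef_ - beta)) ** 2)
    print(f"lambda = {lam:7.4f}   prediction error = {pred_err:8.4f}   "
          f"nonzeros = {np.sum(fit.coef_ != 0)}")
```

In such experiments one typically sees many spurious nonzero coefficients when $\lambda$ is far below the threshold, and a sparse fit with small prediction error near it, consistent with the phase transition described in the abstract.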
Speech 2: Factor-Driven Two-Regime Regression
Speaker: Yuan Liao (Rutgers)
Abstract: We propose a novel two-regime regression model, where the switching between the regimes is driven by a vector of possibly unobservable factors that are estimated from a much larger panel data set. Estimating this model brings new challenges in terms of both computation and asymptotic theory. We show that our optimization problem can be reformulated as Mixed Integer Optimization and present two alternative computational algorithms. We also derive the asymptotic distribution of the resulting estimator and find that the effect of estimating the factors results in a phase transition on the rates of convergence and asymptotic distributions. (Joint work with S. Lee, M. Seo, and Y. Shin)
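A minimal simulation conveys the setup: the regime indicator depends on a latent factor, which is itself estimated from a large panel by principal components. This sketch is my own simplification under assumed data-generating choices (scalar factor, PCA estimation), not the paper's estimator:

```python
# Sketch of a factor-driven two-regime model: the regime is determined by
# the sign of a latent factor, estimated as the first principal component
# of a larger panel. Model choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, N = 500, 100                        # time periods, panel cross-section
f = rng.standard_normal(n)             # latent scalar factor
panel = np.outer(f, rng.standard_normal(N)) + 0.5 * rng.standard_normal((n, N))

# Estimate the factor as the first left singular vector of the panel
u, _, _ = np.linalg.svd(panel, full_matrices=False)
f_hat = u[:, 0] * np.sqrt(n)
f_hat *= np.sign(f_hat @ f)            # PC sign is arbitrary; align with truth

x = rng.standard_normal(n)
regime = (f > 0).astype(float)         # switching driven by the factor
y = (1.0 + 2.0 * x) * regime + (-1.0 + 0.5 * x) * (1 - regime) \
    + 0.1 * rng.standard_normal(n)

agreement = np.mean((f_hat > 0) == (f > 0))
print(f"regime classification agreement using the estimated factor: {agreement:.2%}")
```

The gap between the true and estimated regime assignments is exactly the source of the "effect of estimating the factors" that drives the phase transition mentioned in the abstract.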
Speech 3: Network Analysis by SCORE
Speaker: Jiashun Jin (CMU)
Abstract: We have collected a data set for the networks of statisticians, consisting of titles, authors, abstracts, MSC numbers, keywords, and citation counts of papers published in representative journals in statistics and related fields. In Phase I of our study, the data set covers all published papers from 2003 to 2012 in Annals of Statistics, Biometrika, JASA, and JRSS-B. In Phase II of our study, the data set covers all published papers in 36 journals in statistics and related fields, spanning 40 years. The data sets motivate an array of interesting problems, and for the talk, I will focus on two closely related problems: network community detection and network membership estimation. We tackle these problems with the recent approach of Spectral Clustering On Ratioed Eigenvectors (SCORE), reveal a surprising simplex structure underlying the networks, and explain why SCORE is the right approach. We use the methods to investigate the Phase I data and report some of the results. We also report some Exploratory Data Analysis (EDA) results including productivity, journal-journal citations, and citation patterns. This part of the results is based on Phase II of our data set, which became available only recently.