Session 40: Modern Approaches for Inference and Estimation

Session title: Modern Approaches for Inference and Estimation
Organizer: Genevera Allen (Rice)
Chair: Genevera Allen (Rice)
Time: June 6th, 1:15pm – 2:45pm
Location: VEC 404/405

Talk 1: High-Dimensional Propensity Score Estimation via Covariate Balancing
Speaker: Yang Ning (Cornell)
Abstract: 
In this paper, we address the problem of estimating the average treatment effect (ATE) and the average treatment effect for the treated (ATT) in observational studies when the number of potential confounders is possibly much greater than the sample size. In particular, we develop a robust method to estimate the propensity score via covariate balancing in high-dimensional settings. Since it is usually impossible to obtain exact covariate balance in high dimensions, we propose to estimate the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome, under the assumption that the outcome model is linear and sparse. The estimated propensity score is then used in the Horvitz-Thompson estimator to infer the ATE and ATT. We prove that the proposed methodology has desirable properties such as sample boundedness, root-$n$ consistency, asymptotic normality, and semiparametric efficiency. We then extend these results to the case where the outcome model is a sparse generalized linear model. More importantly, we show that the proposed estimator is robust to model misspecification. Finally, we conduct simulation studies to evaluate the finite-sample performance of the proposed methodology, and apply it to estimate the causal effects of college attendance on adulthood political participation. Open-source software is available for implementing the proposed methodology. This is joint work with Peng and Imai.
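As a rough illustration of the pipeline described in the abstract, the sketch below simulates a high-dimensional observational study, selects outcome-predictive covariates with the lasso, estimates a logistic propensity score by approximately balancing only those covariates, and plugs the result into the Horvitz-Thompson estimator of the ATE. This is a minimal sketch, not the authors' implementation or their open-source software; the simulated data, the clipping of the propensity scores, and all function and variable names are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the authors' code):
# 1) select outcome-predictive covariates with the lasso,
# 2) estimate a logistic propensity score by balancing those covariates,
# 3) plug it into the Horvitz-Thompson estimator of the ATE.

import numpy as np
from scipy.optimize import least_squares
from scipy.special import expit
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Simulated observational data: n samples, p >> n potential confounders.
n, p = 200, 500
X = rng.normal(size=(n, p))
true_ps = expit(0.8 * X[:, 0] - 0.8 * X[:, 1])               # true propensity score
T = rng.binomial(1, true_ps)                                  # treatment indicator
Y = 2.0 * T + X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)    # sparse linear outcome model

# Step 1: select covariates predictive of the outcome (sparse outcome model),
# here fitted on the control group only.
lasso = LassoCV(cv=5).fit(X[T == 0], Y[T == 0])
selected = np.flatnonzero(lasso.coef_)
Xs = np.column_stack([np.ones(n), X[:, selected]])            # keep an intercept column

# Step 2: estimate the propensity score by (approximately) balancing the
# selected covariates between the weighted treated and control groups.
def balance_residuals(theta):
    ps = np.clip(expit(Xs @ theta), 1e-3, 1 - 1e-3)
    w = T / ps - (1 - T) / (1 - ps)       # ATE balancing weights
    return Xs.T @ w / n                   # moment conditions driven toward zero

theta_hat = least_squares(balance_residuals, np.zeros(Xs.shape[1])).x
ps_hat = np.clip(expit(Xs @ theta_hat), 1e-3, 1 - 1e-3)

# Step 3: Horvitz-Thompson estimator of the ATE with the balanced propensity score.
ate_hat = np.mean(T * Y / ps_hat) - np.mean((1 - T) * Y / (1 - ps_hat))
print(f"estimated ATE: {ate_hat:.3f} (simulated truth: 2.0)")
```

The balancing step here solves the moment conditions by nonlinear least squares; that is only one convenient way to impose approximate balance on the selected covariates, chosen for brevity rather than fidelity to the paper.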

Talk 2: Interactive algorithms for graphical model selection
Speaker: Gautam Dasarathy (Rice)
Abstract: 
With rapid progress in our ability to acquire, process, and learn from data, the true democratization of data-driven intelligence has never seemed closer. Unfortunately, there is a catch. Machine learning algorithms have traditionally been designed independently of the systems that acquire data. As a result, there is a fundamental disconnect between their promise and their real-world applicability. An urgent need has therefore emerged for integrating the design of learning and acquisition systems.
In this talk, I will present an approach for addressing this learning-acquisition disconnect using interactive machine learning methods. In particular, I will consider the problem of learning graphical model structure in high dimensions. This will highlight how traditional (open-loop) methods fail to account for data acquisition constraints that arise in applications ranging from sensor networks to calcium imaging of the brain. I will then demonstrate how one can close this loop using techniques from interactive machine learning. I will conclude by discussing several connections to post-selection inference in this context.
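To make the closed-loop idea concrete, here is a toy sketch (not the speaker's algorithm): each measurement observes only a pair of variables, and the remaining acquisition budget is spent adaptively on the pairs whose edge status is most ambiguous, instead of being split uniformly in advance. The pairwise-measurement model, the correlation threshold of 0.3, and the ambiguity score are all illustrative assumptions.

```python
# Toy sketch of closed-loop (interactive) graph selection under an
# acquisition constraint: measurements come in pairs of nodes, and the
# budget is spent where the edge decision is still ambiguous.

import numpy as np

rng = np.random.default_rng(1)
d = 6                                     # number of nodes
true_edges = {(0, 1), (1, 2), (3, 4)}     # ground-truth graph (simulation only)

def sample_pair(i, j, n):
    """Simulate n joint observations of nodes (i, j); correlated iff edge."""
    rho = 0.6 if (min(i, j), max(i, j)) in true_edges else 0.0
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]
data = {pr: sample_pair(*pr, 20) for pr in pairs}   # small uniform pilot sample
budget = 500                                        # remaining pairwise measurements

def corr_and_halfwidth(z):
    r = np.corrcoef(z[:, 0], z[:, 1])[0, 1]
    return r, 2.0 / np.sqrt(len(z))       # crude +/-2 SE band for the correlation

# Interactive loop: always measure the most ambiguous pair next.
while budget > 0:
    scores = {}
    for pr, z in data.items():
        r, hw = corr_and_halfwidth(z)
        scores[pr] = hw - abs(abs(r) - 0.3)          # ambiguity around the 0.3 decision point
    target = max(scores, key=scores.get)
    data[target] = np.vstack([data[target], sample_pair(*target, 10)])
    budget -= 10

edges_hat = {pr for pr, z in data.items()
             if abs(np.corrcoef(z[:, 0], z[:, 1])[0, 1]) > 0.3}
print("recovered edges:", sorted(edges_hat))
```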

Talk 3: AdaPT: An interactive procedure for multiple testing with side information
Speaker: Will Fithian (UC Berkeley)
Abstract: 
We consider the problem of multiple hypothesis testing with generic side information: for each hypothesis we observe both a p-value and a predictor encoding contextual information about the hypothesis. For large-scale problems, adaptively focusing power on the more promising hypotheses (those more likely to yield discoveries) can lead to much more powerful multiple testing procedures. We propose a general iterative framework for this problem, called the Adaptive p-value Thresholding (AdaPT) procedure, which adaptively estimates a Bayes-optimal p-value rejection threshold and controls the false discovery rate (FDR) in finite samples. At each iteration, the analyst proposes a rejection threshold, observes partially censored p-values, and estimates the false discovery proportion (FDP) below the threshold; she either stops and rejects the p-values below the threshold, or proposes another threshold, continuing until the estimated FDP falls below α. Our procedure is adaptive in an unusually strong sense, permitting the analyst to use any statistical or machine learning method she chooses to estimate the optimal threshold, and to switch between different models at each iteration as information accrues.
This is joint work with Lihua Lei.
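
To make the estimate-and-shrink mechanics concrete, the sketch below runs an AdaPT-style loop with a constant, covariate-free threshold: candidate rejections below the threshold are weighed against a mirror count of large p-values, and the threshold is shrunk until the estimated FDP is at most α. Fitting the threshold s_t(x) to the side information with an analyst-chosen model, which is the main point of AdaPT, is omitted here; the simulated p-values and the geometric shrinkage rule are illustrative assumptions, not the procedure as published.

```python
# Minimal AdaPT-style sketch with a constant (covariate-free) threshold.
# The mirror count of large p-values stands in for the unknown number of
# false rejections below the threshold.

import numpy as np

rng = np.random.default_rng(2)
m = 2000
nonnull = rng.random(m) < 0.1                                  # 10% true signals
p = np.where(nonnull, rng.beta(0.1, 1.0, m), rng.random(m))    # signals get small p-values

alpha, s = 0.1, 0.45          # target FDR level and initial threshold (kept below 0.5)
while True:
    R = np.sum(p <= s)                    # candidate rejections below the threshold
    A = np.sum(p >= 1 - s)                # mirror count estimating false rejections
    fdp_hat = (1 + A) / max(R, 1)         # conservative FDP estimate
    if fdp_hat <= alpha or s < 1e-4:
        break
    s *= 0.9                              # shrink the threshold and try again

reject = p <= s
fdp_true = np.mean(~nonnull[reject]) if reject.any() else 0.0
print(f"threshold {s:.4f}, rejections {reject.sum()}, empirical FDP {fdp_true:.3f}")
```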