Session 24: Novel inference approaches for complex data settings

Session title: Novel inference approaches for complex data settings
Organizer: Regina Liu (Rutgers)
Chair: Junhui Wang (City U. of Hong Kong)
Time: June 5th, 1:15pm – 2:45pm
Location: VEC 1202/1203

Speech 1: Stein Discrepancy Methods for Robust Estimation and Regression
Speaker: Emre Barut (George Washington University)  
Abstract: All statistical procedures depend heavily on their modeling assumptions and on how closely those assumptions match reality. This dependence is critical: even a slight deviation from the assumptions can cause major instabilities in statistical estimation.

To mitigate issues arising from model mismatch, numerous methods have been developed in the area of robust statistics. However, these approaches target specific problems, such as heavy-tailed or correlated errors. The lack of a holistic framework for robust regression poses a major problem for the data practitioner: to build a robust statistical model, possible issues in the data must be identified and understood before the analysis is conducted, and the practitioner must also know which robust models apply in which situations.

In this talk, we propose a new framework for robust parameter estimation to address these issues. The new method relies on the Stein discrepancy measure, and the estimate is given as the empirical minimizer of a second-order U-statistic. The approach provides a “silver bullet” that can be used in a range of problems. When estimating parameters in the exponential family, the estimate can be obtained by solving a convex problem. For parameter estimation, our approach significantly improves upon the MLE when outliers are present or when the model is misspecified. Furthermore, we show how the new estimator can be used for robust high-dimensional covariance estimation. Extensions of the method to regression problems and its efficient computation by subsampling are also discussed.
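For concreteness, one standard form of such a U-statistic objective is the kernelized Stein discrepancy (the notation below is introduced here for illustration and is not taken from the talk). For a model density p_\theta with score s_\theta(x) = \nabla_x \log p_\theta(x) and a kernel k, the discrepancy between the data x_1, \dots, x_n and p_\theta can be estimated by

  \widehat{D}_n(\theta) = \frac{1}{n(n-1)} \sum_{i \neq j} u_\theta(x_i, x_j),

  u_\theta(x, x') = s_\theta(x)^\top k(x, x')\, s_\theta(x') + s_\theta(x)^\top \nabla_{x'} k(x, x') + \nabla_x k(x, x')^\top s_\theta(x') + \mathrm{tr}\big(\nabla_x \nabla_{x'} k(x, x')\big),

with the parameter estimate \hat{\theta} = \arg\min_\theta \widehat{D}_n(\theta). Because the score of an exponential-family density is affine in \theta, this objective is quadratic, hence convex, in \theta, consistent with the convexity noted in the abstract.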

Speech 2: Toward a sampling theory for statistical network analysis
Speaker: Harry Crane (Rutgers)
Abstract: Many classical network models (e.g., stochastic blockmodels, graphons, exponential random graph models) are ill-suited for modern applications because they implicitly assume that the data are obtained by an unrealistic sampling scheme, such as vertex selection or simple random vertex sampling. More recent approaches (completely random measures and edge exchangeable models) address some of these limitations, but leave plenty of room for further exploration of the role played by sampling in network analysis. I present here a framework intended to overcome the theoretical and practical issues arising from the use of ill-specified network models. Within this framework, I discuss how to incorporate the sampling scheme into statistical models in a way that is both flexible and insightful for modern network science applications.

Speech 3: Estimating a covariance function from fragments of functional data
Speaker: Aurore Delaigle (U of Melbourne)
Abstract: Functional data are often observed only partially, in the form of fragments. In that case, the standard approaches for estimating the covariance function do not work because entire parts of the domain are completely unobserved. In previous work, Delaigle and Hall (2013, 2016) have suggested ways of estimating the covariance function, based for example on Markov assumptions. In this work we take a completely different approach which does not rely on such assumptions. We show that, using a tensor product approach, it is possible to reconstruct the covariance function using observations located only on the diagonal of its domain.
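One way to make the tensor-product idea concrete (illustrative notation, not taken from the abstract): write the covariance function as

  K(s, t) \approx \sum_{j=1}^{J} \sum_{k=1}^{J} b_{jk}\, \phi_j(s)\, \phi_k(t),

where the \phi_j are known basis functions (e.g., B-splines) and the coefficient matrix B = (b_{jk}) is symmetric. Since each observed fragment supplies empirical covariances \hat{K}(s, t) only for pairs (s, t) lying near the diagonal s = t, B would be fitted by least squares using those near-diagonal pairs alone, and the fitted expansion then extends K to the rest of its domain.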