Session 8: Supervised and unsupervised learning of complex data

Session title: Supervised and unsupervised learning of complex data
Organizer: Junhui Wang (City U of HK)
Chair: Junhui Wang (City U of HK)
Time: June 4th, 11:00am-12:30pm
Location: VEC 405

Speech 1: Systems of partially linear models with gradient boosting
Speaker: Yongzhao Shao (NYU)
Abstract: We develop systems of partially linear models with gradient boosting for prediction in multicenter studies and for regression-based clustering in large-scale data. Simultaneous variable selection and effect estimation are achieved using LASSO-type penalty functions and ADMM. Simulation studies and real data examples are used to illustrate the effectiveness of the proposed methods.
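As background for the penalized estimation step mentioned in the abstract, the sketch below shows the standard ADMM iteration for a single LASSO problem; the function names, step size rho, and iteration count are illustrative assumptions, and this is not the speaker's full system-of-models procedure.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(X, y, lam, rho=1.0, n_iter=200):
    """Solve min_b 0.5*||X b - y||^2 + lam*||b||_1 by standard ADMM."""
    n, p = X.shape
    b = np.zeros(p); z = np.zeros(p); u = np.zeros(p)
    XtX_rhoI = X.T @ X + rho * np.eye(p)   # reused in every b-update
    Xty = X.T @ y
    for _ in range(n_iter):
        b = np.linalg.solve(XtX_rhoI, Xty + rho * (z - u))  # ridge-like step
        z = soft_threshold(b + u, lam / rho)                 # sparsity step
        u = u + b - z                                        # dual update
    return z
```

In the multicenter setting of the talk, such penalized fits would additionally be coupled across centers; that coupling is beyond this sketch.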

Speech 2: Supervised Dimensionality Reduction for Exponential Family Data
Speaker: Yoonkyung Lee (OSU)
Abstract: Supervised dimensionality reduction techniques, such as partial least squares and supervised principal components, are powerful tools for making predictions with a large number of variables. The implicit squared error terms in their objectives, however, make them less attractive for non-Gaussian data, whether in the covariates or the responses. Drawing on a connection between partial least squares and the Gaussian distribution, we show how partial least squares can be extended to other members of the exponential family, in a manner similar to the generalized linear model, for both the covariates and the responses. Unlike previous attempts, our extension gives latent variables that are easily interpretable as linear functions of the data and is computationally efficient. In particular, it does not require additional optimization for the scores of new observations, so predictions can be made in real time. This is joint work with Andrew Landgraf at Battelle Memorial Institute.
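As a point of reference for the extension described above, here is a minimal sketch of ordinary (Gaussian, squared-error) PLS1 via NIPALS-style deflation; it is only the baseline that the talk generalizes to the exponential family, and the function name and component count are illustrative assumptions.

```python
import numpy as np

def pls1(X, y, n_comp=2):
    """Ordinary PLS1: extract latent scores by covariance with y, then deflate."""
    X = X - X.mean(axis=0); y = y - y.mean()
    n, p = X.shape
    W = np.zeros((p, n_comp)); T = np.zeros((n, n_comp))
    for k in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight vector: direction of max covariance with y
        t = X @ w                        # latent score, a linear function of the data
        p_load = X.T @ t / (t @ t)       # X loading
        q = y @ t / (t @ t)              # y loading
        X = X - np.outer(t, p_load)      # deflate X
        y = y - q * t                    # deflate y
        W[:, k] = w; T[:, k] = t
    return W, T
```

The implicit squared-error fit in these deflation steps is exactly what the exponential-family extension replaces with more general likelihood-based objectives.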

Speech 3: Transform-based unsupervised point registration and unseeded low-rank graph matching
Speaker: Yuan Zhang (OSU)
Abstract: Unsupervised estimation of the correspondence between two point sets has long been an attractive topic for researchers in computer science and electrical engineering. In this paper, we focus on the vanilla form of the problem: matching two point sets that are identical up to a linear transformation. The problem is well studied and many classical algorithms exist, yet many of them suffer from one or more of the following shortcomings: slow computation on large data sets, a limited range of applicable distribution families, and a lack of theoretical analysis. Arguably, the computational bottleneck lies in the need of many methods to evaluate n^2 pairwise similarity or distance measures in each iteration. Moreover, few results bound the error of any specific point matching algorithm, where dependence can be a major obstacle.
In this paper, we propose a novel method that uses the Laplace transform to directly match the underlying distributions of the two point sets. Our method is fast because it avoids the n^2 pairwise evaluations at each iteration. On the theory side, we propose a new error bound on the Wasserstein distance between two distributions in terms of the integrated difference between their Laplace transforms. Based on this bound, we establish consistency of our method. Our method is also distinguished by its versatility: it handles a wide range of distribution families, whereas most existing methods require the data-generating distributions to be continuous.
We then show how our method applies to the problem of matching up the nodes of two low-rank networks so that the aligned networks “look similar”. Numerical comparisons illustrate our method’s significant advantages in both speed and accuracy over existing methods.
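To make the transform-matching idea concrete, the toy sketch below recovers a planar rotation between two unpaired point clouds by minimizing the squared difference of their empirical Laplace transforms over a grid of candidate angles; the probe points, one-parameter rotation family, and grid search are simplifying assumptions and not the estimator analyzed in the talk.

```python
import numpy as np

def emp_laplace(points, S):
    """Empirical Laplace transform at each row s of S: mean_i exp(-<s, x_i>)."""
    return np.exp(-points @ S.T).mean(axis=0)

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.diag([1.0, 0.3])   # anisotropic point cloud
theta_true = 0.7
Y = rng.permutation(X) @ rot(theta_true).T            # rotated, unpaired copy

S = rng.normal(scale=0.5, size=(50, 2))               # probe points for the transform
LY = emp_laplace(Y, S)

# Grid search over the rotation that best aligns the two empirical Laplace transforms.
# Each candidate costs O(n), with no n^2 pairwise distance evaluations.
grid = np.linspace(0, np.pi, 1000)
errs = [np.sum((emp_laplace(X, S @ rot(t)) - LY) ** 2) for t in grid]
theta_hat = grid[int(np.argmin(errs))]
print(theta_hat)  # close to theta_true
```

The same principle of matching distributions rather than individual points is what the talk extends to general linear transformations and to aligning the nodes of two low-rank networks.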