Session 23: Decision making, operations research and statistical learning

Session title: Decision making, operations research and statistical learning
Organizer: Cynthia Rudin (Duke)
Chair: Cynthia Rudin (Duke)
Time: June 5th, 8:30am – 10:00am
Location: VEC 1403

Speech 1: Online Learning of Buyer Behavior under Realistic Pricing Restrictions
Speaker: Theja Tulabandhula (UIC)
Abstract: We propose a new efficient online algorithm to learn the parameters governing the purchasing behavior of a utility-maximizing buyer, who responds to prices in a repeated interaction setting. The key feature of our algorithm is that it can learn even non-linear buyer utility while working with arbitrary price constraints that the seller may impose. This overcomes a major shortcoming of previous approaches, which rely on unrealistic prices to learn these parameters, making them unsuitable in practice.
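As a much-simplified illustration of the setting (not the authors' algorithm), consider a single-item version: the seller may only post prices from a restricted grid, and each buy/no-buy response reveals on which side of the posted price the buyer's reservation value lies. All names and parameters below are hypothetical.

```python
# Toy sketch: learn a buyer's reservation value v from purchase responses,
# posting only prices drawn from a restricted, seller-imposed grid.

def learn_reservation_value(buyer_value, allowed_prices, rounds=30):
    """Bisection-style search over the allowed price grid.

    A purchase at price p reveals v >= p; no purchase reveals v < p.
    Returns the tightest bracket around buyer_value resolvable with
    the permitted prices.
    """
    prices = sorted(allowed_prices)
    lo, hi = 0, len(prices)  # candidate interval over grid indices
    for _ in range(rounds):
        if hi - lo <= 1:
            break
        mid = (lo + hi) // 2
        p = prices[mid]
        bought = buyer_value >= p  # utility-maximizing buyer's response
        if bought:
            lo = mid               # v is at least p
        else:
            hi = mid               # v is below p
    return prices[lo], prices[hi] if hi < len(prices) else float("inf")

low, high = learn_reservation_value(buyer_value=7.3,
                                    allowed_prices=[1, 2, 4, 6, 8, 10])
print(low, high)  # the buyer's value lies in [6, 8)
```

Note how the price restriction caps the achievable accuracy: the estimate can never be finer than the spacing of the allowed grid, which is the kind of constraint the talk's algorithm must work around.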

Speech 2: Smart “Predict, then Optimize”
Speaker: Adam Elmachtoub (Columbia)
Abstract: We consider a class of optimization problems where the objective function is not explicitly provided, but contextual information can be used to predict the objective based on historical data. A traditional approach would be to simply predict the objective based on minimizing prediction error, and then solve the corresponding optimization problem. Instead, we propose a prediction framework that leverages the structure of the optimization problem that will be solved given the prediction. We provide theoretical, algorithmic, and computational results to show the validity and practicality of our framework. This is joint work with Paul Grigas (UC Berkeley).
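A tiny numerical illustration of why prediction error and decision quality can disagree (this sketches the motivation only, not the authors' framework; the cost vectors are made up): when the downstream decision just picks the min-cost option, a predictor with larger squared error can still induce the better decision.

```python
import numpy as np

true_costs = np.array([1.0, 1.1, 3.0])  # actual costs, unknown at decision time
pred_a = np.array([1.3, 1.5, 3.3])      # larger prediction error, right decision
pred_b = np.array([1.12, 1.08, 3.0])    # smaller prediction error, wrong decision

def mse(pred):
    """Standard prediction-error criterion."""
    return float(np.mean((pred - true_costs) ** 2))

def decision_regret(pred):
    """Extra cost incurred by choosing the option the prediction ranks cheapest."""
    chosen = int(np.argmin(pred))
    return float(true_costs[chosen] - true_costs.min())

print(mse(pred_a), decision_regret(pred_a))  # worse predictions, zero regret
print(mse(pred_b), decision_regret(pred_b))  # better predictions, positive regret
```

Here pred_b wins on mean squared error yet mis-ranks the two cheap options, incurring regret; this gap between the prediction loss and the decision loss is what a framework that accounts for the downstream optimization aims to close.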

Speech 3: P-splines with an l1 penalty for repeated measures
Speaker: Brian Segal (Flatiron Health)
Abstract: P-splines are penalized B-splines, in which finite order differences in coefficients are typically penalized with an l2 norm. P-splines can be used for semiparametric regression and can include random effects to account for within-subject variability. In addition to l2 penalties, l1-type penalties have been used in nonparametric and semiparametric regression to achieve greater flexibility, such as in locally adaptive regression splines, l1 trend filtering, and the fused lasso additive model. However, there has been less focus on using l1 penalties in P-splines, particularly for estimating conditional means. We demonstrate the potential benefits of using an l1 penalty in P-splines with an emphasis on fitting non-smooth functions. We propose an estimation procedure using the alternating direction method of multipliers and cross-validation, and provide degrees of freedom and approximate confidence bands based on a ridge approximation to the l1 penalized fit. We also demonstrate potential uses through simulations and an application to electrodermal activity data collected as part of a stress study.
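A minimal sketch of the l1-difference-penalty idea, solved with ADMM as the abstract suggests. For brevity it uses an identity basis rather than B-splines, so it is first-order l1 trend filtering (the fused-lasso signal approximator) rather than the authors' full P-spline model; all tuning choices are illustrative.

```python
import numpy as np

def l1_trend_filter(y, lam=1.0, rho=1.0, iters=200):
    """Solve min_theta 0.5*||y - theta||^2 + lam*||D theta||_1 by ADMM,
    where D takes first-order differences, yielding piecewise-constant fits."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)     # (n-1) x n first-difference matrix
    A = np.eye(n) + rho * D.T @ D      # fixed matrix in the theta-update
    z = np.zeros(n - 1)                # ADMM auxiliary variable for D theta
    u = np.zeros(n - 1)                # scaled dual variable
    for _ in range(iters):
        theta = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dt = D @ theta
        # Soft-thresholding: the proximal step for the l1 penalty.
        z = np.sign(Dt + u) * np.maximum(np.abs(Dt + u) - lam / rho, 0.0)
        u = u + Dt - z
    return theta

# A noisy step function is recovered as a near-piecewise-constant fit,
# the kind of non-smooth target an l2 difference penalty would oversmooth.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.normal(size=40)
fit = l1_trend_filter(y, lam=0.5)
```

Swapping the identity basis for a B-spline basis matrix and raising the difference order would bring this closer to the P-spline setting discussed in the talk.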