Session 30: Interpretable modeling and understanding variables

Session title: Interpretable modeling and understanding variables
Organizer: Cynthia Rudin (Duke)
Chair: Cynthia Rudin (Duke)
Time: June 5th, 3:15pm – 4:45pm
Location: VEC 1203

Speech 1: Model Class Reliance: Variable Importance when all Models are Wrong, but *Many* are Useful.
Speaker: Aaron Fisher (Harvard)
Abstract: Variable importance (VI) tools are typically used to examine the inner workings of prediction models. However, many existing VI measures are not comparable across model types, can obscure implicit assumptions about the data generating distribution, or can give seemingly incoherent results when multiple prediction models fit the data well. In this paper we propose a framework of VI measures for describing how much any model class (e.g. all linear models of dimension p), any model-fitting algorithm (e.g. Ridge regression with fixed regularization parameter), or any individual prediction model (e.g. a single linear model with fixed coefficient vector) relies on covariate(s) of interest. The building block of our approach, Model Reliance (MR), compares a prediction model's expected loss with that model's expected loss on a pair of observations in which the value of the covariate of interest has been switched. Expanding on MR, we propose Model Class Reliance (MCR) as the upper and lower bounds on the degree to which any well-performing prediction model within a class may rely on a variable of interest, or set of variables of interest. Thus, MCR describes reliance on a variable while accounting for the fact that many prediction models, possibly of different parametric forms, may fit the data well. We give probabilistic bounds for MR and MCR, using existing results for U-statistics. These bounds can be generalized to create finite-sample confidence regions for the best-performing models from any class. We also illustrate connections between MR, conditional causal effects, and linear regression coefficients. We then apply MR and MCR to a public dataset of Broward County criminal records to study the reliance of recidivism prediction models on sex and race.
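
As a rough illustration of the MR building block described in the abstract, the sketch below estimates a reliance score by switching in one covariate's values from other observations and comparing the resulting loss with the model's original expected loss. The function and parameter names (`model_reliance`, `feature`, `loss`) are hypothetical, and the simple pair-averaging here stands in for the U-statistic estimator and probabilistic bounds developed in the paper.

```python
import numpy as np

def model_reliance(model, X, y, feature, loss):
    """Rough MR estimate for one covariate (column index `feature`).

    Compares the model's expected loss on the original data with its expected
    loss when the covariate's values are switched in from other observations.
    Returns the ratio (switched loss / original loss).
    """
    n = len(y)
    e_orig = np.mean(loss(y, model.predict(X)))    # standard expected loss
    e_switch = 0.0
    for j in range(n):
        X_swapped = X.copy()
        X_swapped[:, feature] = X[j, feature]      # give every row observation j's value
        e_switch += np.mean(loss(y, model.predict(X_swapped)))
    e_switch /= n                                  # average over "donor" observations;
                                                   # the paper's U-statistic excludes i == j pairs
    return e_switch / e_orig

# Hypothetical usage with a fitted scikit-learn-style regressor:
# mr = model_reliance(fitted_model, X_test, y_test, feature=2,
#                     loss=lambda y, yhat: (y - yhat) ** 2)
```

A ratio near 1 suggests the model barely relies on the covariate; larger values indicate stronger reliance. MCR then asks how small and how large this quantity can be across all well-performing models in a class.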

Speech 2: Feature-Efficient Multi-value Rule Sets for Interpretable Classification
Speaker: Tong Wang (U Iowa)
Abstract: We present Multi-vAlue Rule Set (MARS) models for interpretable classification with feature-efficient representations. Compared to rule sets built from single-valued rules, MARS introduces a more generalized form of association rules that allows multiple values in a condition. Rules of this form are more concise than classical single-valued rules in capturing and describing patterns in data. Our formulation also pursues higher efficiency in feature utilization, which reduces the possible cost of data collection and storage. We propose a Bayesian framework for formulating a MARS model and an efficient inference method for learning a maximum a posteriori model, incorporating theoretically grounded bounds to iteratively reduce the search space and improve search efficiency. Experiments on synthetic and real-world data demonstrate that MARS models have significantly smaller complexity and fewer features than baseline models while remaining competitive in predictive accuracy. A usability study with human subjects shows that MARS models are the easiest to understand among the competing rule-based models. We also apply MARS to a real-world application predicting in-hospital mortality.
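
To make the idea of a multi-value rule concrete, here is a small hand-written sketch, not the learned MARS model or its Bayesian MAP inference: each condition permits a set of values for a feature, and a record is classified as positive when any rule in the set fires. The feature names and values are invented for illustration.

```python
# A multi-value rule is a conjunction of conditions, each allowing a SET of
# values for a feature, e.g. "education in {Bachelors, Masters, Doctorate}
# AND marital_status in {Married}". All names/values below are made up.
rule_a = {"education": {"Bachelors", "Masters", "Doctorate"},
          "marital_status": {"Married"}}
rule_b = {"occupation": {"Exec-managerial", "Prof-specialty"}}

def rule_fires(rule, record):
    # The rule applies when every listed feature takes one of its allowed values.
    return all(record.get(feat) in allowed for feat, allowed in rule.items())

def rule_set_predict(rule_set, record):
    # A rule set classifies a record as positive if ANY of its rules fires.
    return any(rule_fires(rule, record) for rule in rule_set)

record = {"education": "Masters", "marital_status": "Married", "occupation": "Sales"}
print(rule_set_predict([rule_a, rule_b], record))  # True (rule_a fires)
```

The single-valued equivalent of rule_a would need one rule per education value, which is why multi-value conditions yield more concise rule sets that touch fewer features.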

Speech 3: Recent Work on Interpretable Machine Learning Models
Speaker: Cynthia Rudin (Duke)
Abstract: I will give an overview of recent work on interpretable machine learning models, including: (i) one-sided decision trees (rule lists) that provably optimize accuracy and sparsity; (ii) falling rule lists, which are constrained one-sided decision trees; (iii) deep neural networks with an interpretable prototype layer; and (iv) matching methods for causal inference that use machine learning to gain both interpretability and accuracy.
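
As a toy illustration of the rule-list form mentioned in (i) and (ii), the sketch below evaluates an ordered list of if-then rules; in the falling variant, the predicted risk is non-increasing down the list. The conditions and probabilities are invented for illustration and do not come from the talk.

```python
# Toy falling rule list: ordered if-then rules with a default at the end.
# In the "falling" variant the predicted risk never increases down the list.
falling_rule_list = [
    (lambda x: x["prior_arrests"] >= 5, 0.70),
    (lambda x: x["age"] < 25,           0.45),
    (lambda x: x["prior_arrests"] >= 1, 0.30),
]
DEFAULT_RISK = 0.10

def predict_risk(x):
    # The first rule whose condition matches determines the prediction.
    for condition, risk in falling_rule_list:
        if condition(x):
            return risk
    return DEFAULT_RISK

print(predict_risk({"prior_arrests": 2, "age": 30}))  # 0.30
```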