Seminar

Methods for Interpretable Machine Learning - Cynthia Rudin

Date
Mon May 5th 2014, 12:45pm
Event Sponsor
Institute for Research in the Social Sciences (IRiSS) and the Graduate School of Business (GSB)
Location
Room M109 in the McClelland Building (part of the Knight Management Center) of Stanford's Graduate School of Business

Cynthia Rudin, Associate Professor of Statistics at the Massachusetts Institute of Technology

Slides available here.

Abstract

Transparency in predictive modeling is extremely important in many application domains. Domain experts tend not to prefer "black box" predictive models; they would like to understand how predictions are made, and they often prefer models that emulate the way a human expert might make a decision, using a few important variables and giving a clear, convincing reason for a particular prediction.

I will discuss recent work on interpretable predictive modeling with decision lists and sparse integer linear models. I will describe several approaches, one based on Bayesian analysis and another on discrete optimization. I will show examples of interpretable models for stroke prediction in medical patients and for prediction of violent crime in young people raised in out-of-home care. I will also give an overview of some of the current work going on in the Prediction Analysis Lab.
Collaborators: Ben Letham, Berk Ustun, Stefano Tracà, Tyler McCormick, and David Madigan.
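
To give a concrete sense of the kind of model the talk concerns, below is a minimal illustrative sketch (in Python) of a decision list: an ordered set of if-then rules evaluated top to bottom, where each prediction comes with the single rule that fired. The conditions, thresholds, and risk values here are hypothetical placeholders for illustration only; they are not the models or results from the talk.

# Toy decision list: ordered if-then rules, evaluated top to bottom.
# All conditions, thresholds, and risk values are hypothetical examples.

def stroke_risk_decision_list(patient):
    """Return (predicted_risk, reason) from a toy stroke-risk decision list."""
    rules = [
        (lambda p: p["prior_stroke"], "prior stroke", 0.70),
        (lambda p: p["age"] >= 75 and p["hypertension"], "age >= 75 and hypertension", 0.45),
        (lambda p: p["atrial_fibrillation"], "atrial fibrillation", 0.30),
    ]
    for condition, reason, risk in rules:
        if condition(patient):          # first rule that matches determines the prediction
            return risk, reason
    return 0.05, "default rule"         # fall-through default prediction

# Example: the prediction is accompanied by the rule that produced it.
risk, why = stroke_risk_decision_list(
    {"prior_stroke": False, "age": 80, "hypertension": True, "atrial_fibrillation": True}
)
print(f"predicted risk {risk:.2f} because: {why}")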
