Publications
  • Article
  • Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society

Faithful and Customizable Explanations of Black Box Models

By: Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Jure Leskovec
  • Format: Print

Abstract

As predictive models increasingly assist human experts (e.g., doctors) in day-to-day decision making, it is crucial for experts to be able to explore and understand how such models behave in different feature subspaces in order to know if and when to trust them. To this end, we propose Model Understanding through Subspace Explanations (MUSE), a novel model-agnostic framework that facilitates understanding of a given black box model by explaining how it behaves in subspaces characterized by certain features of interest. Our framework gives end users (e.g., doctors) the flexibility to customize the model explanations by letting them input the features of interest. The construction of explanations is guided by a novel objective function that simultaneously optimizes for fidelity to the original model, unambiguity, and interpretability of the explanation. More specifically, our objective allows us to learn, with optimality guarantees, a small number of compact decision sets, each of which captures the behavior of a given black box model in unambiguous, well-defined regions of the feature space. Experimental evaluation on real-world datasets and user studies demonstrates that, relative to state-of-the-art baselines, our approach generates customizable, highly compact, easy-to-understand, yet accurate explanations of various kinds of predictive models.
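
The core idea in the abstract, explaining a black box separately in user-chosen feature subspaces and scoring each explanation on how faithfully it reproduces the model's predictions, can be made concrete with a small sketch. The following is not the authors' algorithm (MUSE optimizes a custom objective over decision sets, with approximation guarantees); it is a minimal illustration, assuming scikit-learn and synthetic data, of fitting a compact surrogate per subspace and measuring its fidelity to the black box.

```python
# Illustrative sketch only -- not the MUSE implementation. Shallow decision
# trees stand in for MUSE's compact decision sets; the subspace split and all
# parameters below are assumptions for demonstration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A black box model we want to explain.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # explain the model's predictions, not the labels

# User-chosen "feature of interest": partition the space on feature 0.
subspaces = {
    "feature_0 <= 0": X[:, 0] <= 0,
    "feature_0 >  0": X[:, 0] > 0,
}

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
for name, mask in subspaces.items():
    # Fit a compact, human-readable surrogate within this subspace.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X[mask], y_bb[mask])
    # Fidelity = agreement with the black box's predictions in the subspace.
    fidelity = surrogate.score(X[mask], y_bb[mask])
    print(f"Subspace {name}: fidelity to black box = {fidelity:.2f}")
    print(export_text(surrogate, feature_names=feature_names))
```

In this sketch, fidelity is simply the fraction of points in a subspace where the surrogate agrees with the black box; MUSE's objective additionally trades this off against unambiguity (non-overlapping regions) and compactness of the rules.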

Keywords

Interpretable Machine Learning; Black Box Models; Decision Making; Framework

Citation

Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Faithful and Customizable Explanations of Black Box Models." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2019).

About the Author

Himabindu Lakkaraju

Technology and Operations Management
→More Publications

More from the Authors

  • 2023, Faculty Research
    When Algorithms Explain Themselves: AI Adoption and Accuracy of Experts' Decisions
    By: Himabindu Lakkaraju and Chiara Farronato
  • 2023, Scientific Data
    Evaluating Explainability for Graph Neural Networks
    By: Chirag Agarwal, Owen Queen, Himabindu Lakkaraju and Marinka Zitnik
  • 2022, Advances in Neural Information Processing Systems (NeurIPS)
    Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
    By: Tessa Han, Suraj Srinivas and Himabindu Lakkaraju