Publications
  • 2022
  • Working Paper

Rethinking Explainability as a Dialogue: A Practitioner's Perspective

By: Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan and Sameer Singh
Format: Print | Language: English | Pages: 23

Abstract

As practitioners increasingly deploy machine learning models in critical domains such as healthcare, finance, and policy, it becomes vital to ensure that domain experts can function effectively alongside these models. Explainability is one way to bridge the gap between human decision-makers and machine learning models. However, most existing work on explainability focuses on one-off, static explanations such as feature importances or rule lists. These sorts of explanations may not be sufficient for use cases that require dynamic, continuous discovery by stakeholders with a range of skills and expertise. Few works in the literature ask decision-makers such as doctors, healthcare professionals, and policymakers about the utility of existing explanations or the other desiderata they would like to see in an explanation going forward. In this work, we address this gap and carry out a study in which we interview doctors, healthcare professionals, and policymakers about their needs and desires for explanations. Our study indicates that decision-makers would strongly prefer interactive explanations, and in particular that these interactions take the form of natural language dialogues. Domain experts wish to treat machine learning models as “another colleague”, i.e., one that can be held accountable by being asked, in expressive and accessible natural language, why it made a particular decision. Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations, as a starting point for future work. Further, we show why natural language dialogues satisfy these principles and are a desirable way to build interactive explanations. Next, we provide a design of a dialogue system for explainability and discuss the risks, trade-offs, and research opportunities of building such systems. Overall, we hope our work serves as a starting point for researchers and engineers to design interactive, natural language dialogue systems for explainability that better serve users’ needs.
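To make the kind of interaction the abstract envisions concrete, the sketch below (in Python) shows a toy dialogue turn over feature-attribution explanations. Everything here is an illustrative assumption: the keyword intent matching, the stand-in attribution scores, and the function names are hypothetical, not the dialogue system designed in the paper.

    # Toy sketch of an interactive explanation dialogue (hypothetical; not the
    # authors' system). A user asks free-text questions; the system answers
    # from precomputed feature attributions (e.g., from a method like SHAP).

    def explain_why(attributions: dict, k: int = 3) -> str:
        """Answer a 'why' question with the k most influential features."""
        top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
        parts = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top)
        return f"The prediction was driven mainly by: {parts}."

    def dialogue_turn(utterance: str, attributions: dict) -> str:
        """Route one user utterance to an explanation. A real system would use
        a natural language understanding component, not keyword matching."""
        text = utterance.lower()
        if "most important" in text:
            return explain_why(attributions, k=1)
        if "why" in text:
            return explain_why(attributions)
        return "I can answer questions like: 'Why did the model decide this?'"

    # Stand-in attribution scores for one prediction (illustrative values only).
    attributions = {"blood_pressure": 0.42, "age": 0.31, "cholesterol": -0.12}
    print(dialogue_turn("Why did the model flag this patient?", attributions))
    # -> The prediction was driven mainly by: blood_pressure (+0.42), age (+0.31), cholesterol (-0.12).

A natural language front end would replace the keyword matching, but the loop structure (parse the intent, fetch or compute an explanation, render it in plain language) is the core of the interactive design the abstract advocates.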

Keywords

Natural Language Conversations; AI and Machine Learning; Experience and Expertise; Interactive Communication; Business and Stakeholder Relations

Citation

Lakkaraju, Himabindu, Dylan Slack, Yuxin Chen, Chenhao Tan, and Sameer Singh. "Rethinking Explainability as a Dialogue: A Practitioner's Perspective." Working Paper, 2022.

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors

    • 2023
    • Faculty Research

    When Algorithms Explain Themselves: AI Adoption and Accuracy of Experts' Decisions

    By: Himabindu Lakkaraju and Chiara Farronato
    • 2022
    • Advances in Neural Information Processing Systems (NeurIPS)

    Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations

    By: Tessa Han, Suraj Srinivas and Himabindu Lakkaraju
    • 2022
    • Advances in Neural Information Processing Systems (NeurIPS)

    Efficiently Training Low-Curvature Neural Networks

    By: Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju and Francois Fleuret