Publications
  • 2022
  • Working Paper

Rethinking Explainability as a Dialogue: A Practitioner's Perspective

By: Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan and Sameer Singh
Format: Print | Language: English | Pages: 23

Abstract

As practitioners increasingly deploy machine learning models in critical domains such as healthcare, finance, and policy, it becomes vital to ensure that domain experts function effectively alongside these models. Explainability is one way to bridge the gap between human decision-makers and machine learning models. However, most of the existing work on explainability focuses on one-off, static explanations like feature importances or rule-lists. These sorts of explanations may not be sufficient for many use cases that require dynamic, continuous discovery from stakeholders with a range of skills and expertise. Few works in the literature ask decision-makers such as doctors, healthcare professionals, and policymakers about the utility of existing explanations and the other desiderata they would like to see in an explanation going forward. In this work, we address this gap and carry out a study in which we interview doctors, healthcare professionals, and policymakers about their needs and desires for explanations. Our study indicates that decision-makers would strongly prefer interactive explanations, and in particular that these interactions take the form of natural language dialogues. Domain experts wish to treat machine learning models as “another colleague”: one that can be held accountable, i.e., asked why it made a particular decision, through expressive and accessible natural language interactions. Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations. Further, we show why natural language dialogues satisfy these principles and are a desirable way to build interactive explanations. Next, we provide a design for a dialogue system for explainability, and discuss the risks, trade-offs, and research opportunities of building these systems. Overall, we hope our work serves as a starting place for researchers and engineers to design interactive, natural language dialogue systems for explainability that better serve users’ needs.
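The dialogue-system design itself is not reproduced on this page, but the general pattern the abstract describes is easy to illustrate. Below is a minimal, hypothetical Python sketch of an interactive explanation loop in the spirit the authors advocate: a user's free-form question is mapped to an explanation operation (here, feature attributions and a simple counterfactual hint) and answered in natural language. All names (ExplanationDialogue, Prediction, etc.) are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (not the authors' system) of an interactive, natural-language
# explanation loop: each user question is mapped to an explanation operation
# over one model prediction, then verbalized. Names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    # Per-feature signed contribution scores, e.g. from a linear model
    # or a SHAP-style attribution method (assumed to be computed upstream).
    contributions: dict


class ExplanationDialogue:
    """Answers free-form questions about a single model prediction."""

    def __init__(self, prediction: Prediction):
        self.pred = prediction

    def ask(self, question: str) -> str:
        # Toy intent routing; a real system would use a trained NLU component.
        q = question.lower()
        if "why" in q:
            return self._why()
        if "what if" in q or "change" in q:
            return self._counterfactual()
        return "I can answer 'why' and 'what if' questions about this prediction."

    def _why(self) -> str:
        # Rank features by absolute contribution and verbalize the top two.
        top = sorted(self.pred.contributions.items(),
                     key=lambda kv: abs(kv[1]), reverse=True)[:2]
        parts = [f"{name} ({'+' if c > 0 else '-'}{abs(c):.2f})" for name, c in top]
        return (f"The model predicted '{self.pred.label}' mainly because of "
                + " and ".join(parts) + ".")

    def _counterfactual(self) -> str:
        # Point to the feature whose change would most strongly move the outcome.
        name, c = max(self.pred.contributions.items(), key=lambda kv: abs(kv[1]))
        direction = "lower" if c > 0 else "higher"
        return (f"A {direction} value of {name} would most strongly push the "
                f"prediction away from '{self.pred.label}'.")


if __name__ == "__main__":
    pred = Prediction(label="high risk",
                      contributions={"blood_pressure": 0.42, "age": 0.17, "bmi": -0.05})
    dialogue = ExplanationDialogue(pred)
    print(dialogue.ask("Why did you decide that?"))
    print(dialogue.ask("What if the inputs change?"))
```

The sketch shows only the conversational control flow the paper argues for; in a deployed system the keyword matching would be replaced by a natural-language-understanding model, and the contribution scores would come from whichever explanation method the practitioners already use.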

Keywords

Natural Language Conversations; AI and Machine Learning; Experience and Expertise; Interactive Communication; Business and Stakeholder Relations

Citation

Lakkaraju, Himabindu, Dylan Slack, Yuxin Chen, Chenhao Tan, and Sameer Singh. "Rethinking Explainability as a Dialogue: A Practitioner's Perspective." Working Paper, 2022.
  • Read Now

About the Author

Himabindu Lakkaraju

Technology and Operations Management
→More Publications

More from the Authors

    • June 2023
    • Transactions on Machine Learning Research (TMLR)

    When Does Uncertainty Matter? Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making

    By: Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage and Himabindu Lakkaraju
    • 2023
    • Proceedings of the International Conference on Learning Representations (ICLR)

    Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse

    By: Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci and Himabindu Lakkaraju
    • April 2023
    • Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

    On the Privacy Risks of Algorithmic Recourse

    By: Martin Pawelczyk, Himabindu Lakkaraju and Seth Neel