  • Article
  • Advances in Neural Information Processing Systems (NeurIPS)

Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses

By: Kaivalya Rawal and Himabindu Lakkaraju

Abstract

As predictive models are increasingly deployed in high-stakes decision-making, there has been considerable interest in developing algorithms that can provide recourses to affected individuals. While developing such tools is important, it is even more critical to analyze and interpret a predictive model, and to vet it thoroughly to ensure that the recourses it offers are meaningful and non-discriminatory, before it is deployed in the real world. To this end, we propose a novel model-agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations that provide an interpretable and accurate summary of recourses for the entire population. We formulate a novel objective which simultaneously optimizes for correctness of the recourses and interpretability of the explanations, while minimizing overall recourse costs across the entire population. More specifically, our objective enables us to learn, with optimality guarantees on recourse correctness, a small number of compact rule sets, each of which captures recourses for well-defined subpopulations within the data. We also demonstrate theoretically that several of the prior approaches proposed to generate recourses for individuals are special cases of our framework. Experimental evaluations with real-world datasets and user studies demonstrate that our framework can provide decision makers with a comprehensive overview of recourses corresponding to any black-box model, and consequently help detect undesirable model biases and discrimination.
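To make the idea of summarizing recourses with rule sets concrete, here is a minimal sketch of evaluating a two-level rule set (a subgroup condition paired with a recourse) against a black-box classifier, scoring coverage, recourse correctness, and total recourse cost. The toy model, the `score_rule_set` helper, and the cost weights are illustrative assumptions, not the paper's actual AReS objective or optimization procedure.

```python
def predict(x):
    # Toy black-box classifier: approve (1) if income + 5 * credit_lines > 100.
    return 1 if x["income"] + 5 * x["credit_lines"] > 100 else 0


def apply_recourse(x, recourse):
    # Apply the prescribed feature changes to an individual.
    y = dict(x)
    y.update(recourse)
    return y


def score_rule_set(rules, population, cost_fn):
    """Score a list of (subgroup condition, recourse) pairs.

    For each negatively classified individual, the first matching rule's
    recourse is applied; we count how many individuals are covered, how many
    recourses flip the prediction, and the total recourse cost incurred.
    """
    covered, correct, total_cost = 0, 0, 0.0
    for x in population:
        if predict(x) == 1:
            continue  # recourse is only needed for denied individuals
        for condition, recourse in rules:
            if condition(x):
                covered += 1
                if predict(apply_recourse(x, recourse)) == 1:
                    correct += 1
                total_cost += cost_fn(x, recourse)
                break
    return {"covered": covered, "correct": correct, "cost": total_cost}


# Usage: two subgroup rules, with cost measured as total absolute feature change.
population = [
    {"income": 50, "credit_lines": 2},
    {"income": 120, "credit_lines": 0},
    {"income": 80, "credit_lines": 1},
]
rules = [
    (lambda x: x["income"] < 70, {"credit_lines": 12}),
    (lambda x: True, {"income": 110}),
]
result = score_rule_set(
    rules, population, lambda x, r: sum(abs(r[k] - x[k]) for k in r)
)
```

A search over candidate subgroup/recourse pairs that maximizes such a correctness score while penalizing rule-set size and cost would be one (simplified) way to read the trade-off the abstract describes.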

Keywords

Predictive Models; Decision Making; Framework; Mathematical Methods

Citation

Rawal, Kaivalya, and Himabindu Lakkaraju. "Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses." Advances in Neural Information Processing Systems (NeurIPS) 33 (2020).

About the Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors

    • 2024
    • Faculty Research

    Fair Machine Unlearning: Data Removal while Mitigating Disparities

    By: Himabindu Lakkaraju, Flavio Calmon, Jiaqi Ma and Alex Oesterling
    • 2024
    • Faculty Research

    Quantifying Uncertainty in Natural Language Explanations of Large Language Models

    By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
    • 2023
    • Advances in Neural Information Processing Systems (NeurIPS)

    Post Hoc Explanations of Language Models Can Improve Language Models

    By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju