Publications
  • 2023
  • Article
  • Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)

On Minimizing the Impact of Dataset Shifts on Actionable Explanations

By: Anna P. Meyer, Dan Ley, Suraj Srinivas and Himabindu Lakkaraju
  • Pages: 11

Abstract

The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models are periodically retrained to handle dataset shifts, and this process may invalidate some of the previously prescribed explanations, rendering them unactionable. Yet it is unclear if and when such invalidations occur, and what factors determine explanation stability, i.e., whether an explanation remains unchanged amidst model retraining due to dataset shifts. In this paper, we address the aforementioned gaps and provide one of the first theoretical and empirical characterizations of the factors influencing explanation stability. To this end, we conduct a rigorous theoretical analysis demonstrating that model curvature, the weight decay parameter used during training, and the magnitude of the dataset shift are key factors that determine the extent of explanation (in)stability. Extensive experimentation with real-world datasets not only validates our theoretical results but also demonstrates that these factors dramatically impact the stability of explanations produced by various state-of-the-art methods.
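The kind of (in)stability the abstract describes can be illustrated with a small, hypothetical sketch — not the paper's actual method. We train an L2-regularised logistic regression (the `weight_decay` term plays the role of the weight decay parameter discussed above), retrain it after simulating a dataset shift by perturbing the features, and compare the gradient-based explanations before and after retraining via cosine similarity. All model choices, function names, and shift magnitudes here are illustrative assumptions:

```python
import numpy as np

def train_logreg(X, y, weight_decay=0.0, lr=0.1, steps=500):
    """Gradient-descent training of L2-regularised logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))                  # predicted probabilities
        grad = X.T @ (p - y) / len(y) + weight_decay * w  # loss gradient + weight decay
        w -= lr * grad
    return w

def gradient_explanation(w):
    """For a linear model, the input gradient of the logit is simply w."""
    return w

def cosine_stability(w_old, w_new):
    """Cosine similarity between explanations before and after retraining."""
    e_old, e_new = gradient_explanation(w_old), gradient_explanation(w_new)
    return float(e_old @ e_new / (np.linalg.norm(e_old) * np.linalg.norm(e_new)))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Simulate a dataset shift by perturbing the features, then retrain.
X_shift = X + 0.2 * rng.normal(size=X.shape)

for wd in (0.0, 1.0):
    w0 = train_logreg(X, y, weight_decay=wd)
    w1 = train_logreg(X_shift, y, weight_decay=wd)
    print(f"weight decay {wd}: explanation cosine similarity {cosine_stability(w0, w1):.3f}")
```

A similarity near 1 means the explanation survived retraining; comparing the printed values across the two weight-decay settings gives a rough, toy-scale feel for how regularisation can influence explanation stability under shift.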

Keywords

Mathematical Methods; Analytics and Data Science

Citation

Meyer, Anna P., Dan Ley, Suraj Srinivas, and Himabindu Lakkaraju. "On Minimizing the Impact of Dataset Shifts on Actionable Explanations." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 39th (2023): 1434–1444.

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors
  • 2024, Faculty Research
    Fair Machine Unlearning: Data Removal while Mitigating Disparities
    By: Himabindu Lakkaraju, Flavio Calmon, Jiaqi Ma and Alex Oesterling
  • 2024, Faculty Research
    Quantifying Uncertainty in Natural Language Explanations of Large Language Models
    By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
  • 2023, Advances in Neural Information Processing Systems (NeurIPS)
    Post Hoc Explanations of Language Models Can Improve Language Models
    By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju