Publications
  • Article
  • Advances in Neural Information Processing Systems (NeurIPS)

Counterfactual Explanations Can Be Manipulated

By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh

Abstract

Counterfactual explanations are useful both for generating recourse and for auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate that both men and women must earn $100 more on average to receive a loan, can we be sure that lower-cost recourse does not exist for the men? By construction, we show that adversaries can design models for which counterfactual explanations generate similar-cost recourses across groups. However, the same methods provide much lower-cost recourses for specific subgroups in the data when the original instances are slightly perturbed, effectively hiding recourse disparities in the model. We demonstrate these vulnerabilities in a variety of counterfactual explanation techniques. On loan and violent crime prediction data sets, we train models for which counterfactual explanations find up to 20x lower-cost recourse for specific subgroups. These results raise crucial concerns about the dependability of current counterfactual explanation techniques in the presence of adversarial actors, which we hope will inspire further investigation into robust and reliable counterfactual explanations.
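
To make the mechanism concrete, here is a minimal Python sketch of the idea, assuming a hand-built toy rather than the trained models studied in the paper: a hypothetical two-feature loan scorer is given a hidden trigger on its second feature, and a bare-bones gradient-ascent counterfactual search (a simplification of a Wachter-style search, with the proximity penalty dropped) is run from an applicant before and after the adversary's perturbation delta. All names, thresholds, and the trigger itself are illustrative assumptions, not the paper's construction.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def model(x):
        # Hand-crafted stand-in for an adversarially trained loan model
        # (hypothetical). Approval normally requires the first feature to
        # rise to ~5, but when the second feature exceeds a hidden trigger
        # value (a caricature of the paper's learned sensitivity to the
        # perturbation delta) the threshold drops to ~1.
        threshold = 1.0 if x[1] > 0.4 else 5.0
        return sigmoid(x[0] - threshold)

    def gradient(x, eps=1e-4):
        # Central finite-difference gradient of the model score.
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (model(x + d) - model(x - d)) / (2 * eps)
        return g

    def counterfactual(x0, lr=0.1, max_steps=10000):
        # Bare-bones gradient ascent on the score until the decision flips.
        # Real methods (e.g., Wachter et al.) also penalize distance from
        # x0; that term is dropped here to keep the sketch short.
        x = x0.copy()
        for _ in range(max_steps):
            if model(x) >= 0.5:
                break
            x = x + lr * gradient(x)
        return x

    x = np.array([0.0, 0.0])        # an applicant the model rejects
    delta = np.array([0.0, 0.5])    # the adversary's trigger perturbation

    cost_clean = np.linalg.norm(counterfactual(x) - x)
    cost_pert = np.linalg.norm(counterfactual(x + delta) - (x + delta))
    print("recourse cost from x:        ", cost_clean)  # ~5.0
    print("recourse cost from x + delta:", cost_pert)   # ~1.0

In the paper, the analogous gap arises from models trained end to end to fool several real counterfactual explanation techniques, yielding recourse costs up to 20x lower on perturbed instances from loan and violent crime prediction data.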

Keywords

Machine Learning Models; Counterfactual Explanations

Citation

Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors
  • When Algorithms Explain Themselves: AI Adoption and Accuracy of Experts' Decisions (Faculty Research, 2023)
    By: Himabindu Lakkaraju and Chiara Farronato
  • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations (Advances in Neural Information Processing Systems (NeurIPS), 2022)
    By: Tessa Han, Suraj Srinivas and Himabindu Lakkaraju
  • Efficiently Training Low-Curvature Neural Networks (Advances in Neural Information Processing Systems (NeurIPS), 2022)
    By: Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju and Francois Fleuret