Publications
  • Article
  • Proceedings of the National Academy of Sciences

Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)

By: Eva Ascarza and Ayelet Israeli

Abstract

An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (defined by demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected” attributes. This unintended discrimination is often caused by underlying correlations in the data between protected attributes and other observed characteristics used by the algorithm (or machine learning (ML) tool) to create predictions and target individuals optimally. Because these correlations are hidden in high-dimensional data, removing protected attributes from the database does not solve the discrimination problem; instead, removing those attributes often exacerbates the problem by making the bias undetectable and, in some cases, even increasing it.
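The proxy effect described above can be reproduced in a toy simulation (not from the paper; the variable names and distributions below are illustrative assumptions): a targeting rule that never observes the protected attribute still allocates treatment very unevenly across groups, because an observed feature is correlated with that attribute.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., group membership), never shown to the model.
z = rng.integers(0, 2, n)

# Observed feature correlated with the protected attribute -- a "proxy".
x = z + rng.normal(0.0, 0.5, n)

# A simple targeting rule based only on x: target the top half of scores.
targeted = x > np.median(x)

rate_z0 = targeted[z == 0].mean()
rate_z1 = targeted[z == 1].mean()
print(f"targeting rate, group 0: {rate_z0:.2f}")
print(f"targeting rate, group 1: {rate_z1:.2f}")
```

Even though `z` was dropped from the data, the two groups are targeted at sharply different rates, and with `z` absent the imbalance is no longer directly detectable from the model's inputs.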

We propose BEAT (Bias-Eliminating Adapted Trees) to address these issues. This approach allows decision makers to target individuals based on differences in their predicted behavior—hence capturing value from personalization—while ensuring a balanced allocation of resources across individuals, guaranteeing both group and individual fairness. Essentially, the method only extracts heterogeneity in the data that is unrelated to protected attributes. To do so, we build on the General Random Forest (GRF) framework (Wager and Athey 2018; Athey et al. 2019) and develop a targeting allocation that is “balanced” with respect to protected attributes. We validate BEAT using simulations and an online experiment with N=3,146 participants. This approach can be applied to any type of allocation decision that is based on prediction algorithms, such as medical treatments, hiring decisions, product recommendations, or dynamic pricing.
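BEAT itself adapts the GRF splitting rules, which is beyond the scope of a short sketch. As a loose intuition for what a policy that is “balanced” with respect to protected attributes looks like, the toy example below (an illustrative assumption, not the authors' algorithm) residualizes the scoring feature against the protected attribute by demeaning within groups, so the remaining heterogeneity used for targeting is unrelated to the protected attribute:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n)           # protected attribute
x = z + rng.normal(0.0, 0.5, n)     # observed feature correlated with z

# Residualize the feature against the protected attribute: subtract each
# group's mean so the resulting score is uncorrelated with z.
group_means = np.array([x[z == 0].mean(), x[z == 1].mean()])
x_res = x - group_means[z]

# Target the top half by the residualized score.
targeted = x_res > np.median(x_res)

rate_z0 = targeted[z == 0].mean()
rate_z1 = targeted[z == 1].mean()
print(f"targeting rate, group 0: {rate_z0:.2f}")
print(f"targeting rate, group 1: {rate_z1:.2f}")
```

After residualization the two groups are targeted at (approximately) equal rates, while individual-level variation in the score is preserved; this is the flavor of balance the abstract describes, achieved in BEAT through the tree-building procedure rather than this simple demeaning.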

Keywords

Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods

Citation

Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.

About The Authors

Eva Ascarza

Marketing

Ayelet Israeli

Marketing

More from the Authors

    • December 2022 (Revised January 2023)
    • Faculty Research

    Cann: High Hopes for Cannabis Infused Beverages

    By: Ayelet Israeli and Anne V. Wilson
    • November 2022
    • Faculty Research

    The Future of E-Commerce: Lessons from the Livestream Wars in China

    By: Ayelet Israeli, Jeremy Yang and Billy Chan
    • November–December 2022
    • Marketing Science

    The Value of Descriptive Analytics: Evidence from Online Retailers

    By: Ron Berman and Ayelet Israeli