Publications
  • 2019
  • Article
  • Proceedings of the Conference on Fairness, Accountability, and Transparency

Fair Algorithms for Learning in Allocation Problems

By: Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zachary Schutzman
  • Format: Electronic

Abstract

Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g., loans or police officers) among several groups in order to maximize some objective (e.g., loans given that are repaid, or criminals that are apprehended). In such problems, fairness is often also a concern. One natural notion of fairness, based on general principles of equality of opportunity, asks that, conditional on an individual being a candidate for the resource in question, the probability of actually receiving it be approximately independent of the individual's group. In lending, for example, this would mean that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing, it would mean that two individuals committing the same crime in different districts would have roughly equal chances of being arrested.
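This equality-of-opportunity notion can be made concrete with a small sketch (our own illustration, not the authors' code). Assume a simple model in which a group with c candidates receiving v units of the resource serves min(v, c) of its candidates uniformly at random; fairness then asks that the resulting per-candidate probabilities be roughly equal across groups:

```python
# A minimal sketch (our own illustration, not the paper's code) of the
# fairness notion: conditional on being a candidate, the chance of
# receiving the resource should be approximately independent of group.
# Assumed model: a group with c candidates and v units serves min(v, c)
# candidates uniformly at random.

def discovery_probability(allocation: int, candidates: int) -> float:
    """Chance that a given candidate in a group receives the resource."""
    if candidates == 0:
        return 1.0  # vacuously fair: there is no candidate to miss
    return min(allocation, candidates) / candidates

def fairness_violation(allocations, candidate_counts) -> float:
    """Largest pairwise gap in discovery probability across groups."""
    probs = [discovery_probability(v, c)
             for v, c in zip(allocations, candidate_counts)]
    return max(probs) - min(probs)

# Equal allocations but unequal candidate counts violate fairness:
# candidates in the busier group are discovered less often.
print(fairness_violation([5, 5], [5, 10]))  # 0.5
```

Under this assumed model, equalizing allocations is not the same as equalizing opportunity: the constraint binds on the per-candidate probability, not on the raw number of units each group receives.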
In this paper, we formalize this general notion of fairness for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the allocator does not know the frequency of candidates (i.e., creditworthy individuals or criminals) in each group. The algorithm operates in a censored feedback model in which only the number of candidates who received the resource under a given allocation can be observed, rather than the true number of candidates in each group. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to, and do not learn about crimes committed in a district where the police presence is low. As an application of our framework and algorithm, we consider the predictive policing problem, in which the resource being allocated is the number of police officers assigned to each district. The learning algorithm is trained on arrest data gathered from its own deployments on previous days, creating a potential feedback loop that our algorithm provably overcomes. In this case, the fairness constraint asks that the probability that an individual who has committed a crime is arrested be independent of the district in which they live. We evaluate the performance of our learning algorithm on the Philadelphia Crime Incidents dataset.
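The censored feedback model lends itself to a toy illustration (a hypothetical sketch under our own simplifying assumptions, not the paper's algorithm): after allocating v_i units to group i, the allocator observes only the number of candidates served, modeled here as min(v_i, c_i), never the true candidate count c_i.

```python
# Toy sketch of censored feedback (our own assumption, not the paper's
# algorithm): the allocator sees served counts, capped at the allocation,
# rather than the true number of candidates in each group.

def observe(allocations, true_candidate_counts):
    """Return the censored observations min(v_i, c_i) for each group."""
    return [min(v, c) for v, c in zip(allocations, true_candidate_counts)]

# With 3 officers in a district where 7 crimes occurred, we observe 3
# arrests and cannot tell 7 crimes from 70. Only when the allocation
# exceeds the candidate count does the observation become uncensored.
print(observe([3, 10], [7, 7]))  # [3, 7]
```

Informally, an observation strictly below the allocation reveals the true candidate count for that group, which suggests one route to learning the unknown group frequencies despite the censoring; the paper's algorithm handles this, and the resulting feedback loop, with provable guarantees.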

Keywords

Allocation Problems; Algorithms; Fairness; Learning

Citation

Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.

About the Author

Seth Neel

Technology and Operations Management

More from the Authors

    • September 2022 (Revised October 2022)
    • Faculty Research

    Data Privacy in Practice at LinkedIn

    By: Iavor Bojinov, Marco Iansiti and Seth Neel
    • 2022
    • Symposium on Foundations of Responsible Computing (FORC)

    Towards the Unification and Robustness of Post hoc Explanation Methods

    By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
    • 2022
    • Faculty Research

    The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective

    By: Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu and Himabindu Lakkaraju