  • 2019
  • Article
  • Proceedings of the Conference on Fairness, Accountability, and Transparency

An Empirical Study of Rich Subgroup Fairness for Machine Learning

By: Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
  • Format: Print

Abstract

Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice.
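To make the notion concrete: under rich subgroup fairness, auditing a classifier means searching a large class of subgroup functions for the subgroup with the largest size-weighted false positive rate disparity relative to the population. The sketch below is illustrative only, assuming binary labels and predictions, with a brute-force scan over an explicit list of subgroup masks standing in for the cost-sensitive learning oracle that Kearns et al. use for the same search; the function names and the exact weighting are our own simplification, not the paper's implementation.

```python
import numpy as np

def fp_disparity(y_true, y_pred, group_mask):
    """Size-weighted gap between a subgroup's false positive rate
    and the overall false positive rate (illustrative measure in the
    spirit of the gamma-unfairness of Kearns et al. [2018])."""
    neg = (y_true == 0)                    # negatives: where a positive prediction is a false positive
    if neg.sum() == 0:
        return 0.0
    overall_fpr = y_pred[neg].mean()
    g_neg = neg & group_mask               # negatives that fall inside this subgroup
    if g_neg.sum() == 0:
        return 0.0
    group_fpr = y_pred[g_neg].mean()
    # Weight by the subgroup's share of negatives, so that disparities
    # on vanishingly small subgroups do not dominate the audit.
    weight = g_neg.sum() / neg.sum()
    return weight * abs(group_fpr - overall_fpr)

def audit(y_true, y_pred, subgroups):
    """Return (index, disparity) of the worst subgroup in an explicit list
    of boolean masks. The paper instead searches an exponentially large
    class of functions via a learning oracle."""
    gaps = [fp_disparity(y_true, y_pred, g) for g in subgroups]
    worst = int(np.argmax(gaps))
    return worst, gaps[worst]
```

In the paper's game-theoretic formulation, this audit step plays the role of the "Auditor" best-responding to the current classifier; the "Learner" then reweights the flagged subgroup and retrains, and the two iterate toward an approximate equilibrium.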

Keywords

Machine Learning; Fairness; AI and Machine Learning

Citation

Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.

About the Author

Seth Neel

Technology and Operations Management

More from the Authors

    • 2023
    • Proceedings of the Conference on Empirical Methods in Natural Language Processing

    MoPe: Model Perturbation-based Privacy Attacks on Language Models

    By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
    • 2023
    • Faculty Research

    Black-box Training Data Identification in GANs via Detector Networks

    By: Lukman Olagoke, Salil Vadhan and Seth Neel
    • 2023
    • Faculty Research

    In-Context Unlearning: Language Models as Few Shot Unlearners

    By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju