Publications
  • 2023
  • Article
  • Proceedings of the AAAI Conference on Artificial Intelligence

Provable Detection of Propagating Sampling Bias in Prediction Models

By: Pavan Ravishankar, Qingyu Mo, Edward McFowland III and Daniel B. Neill
  • Format: Electronic
  • Pages: 8

Abstract

With an increased focus on incorporating fairness in machine learning models, it becomes imperative not only to assess and mitigate bias at each stage of the machine learning pipeline but also to understand the downstream impacts of bias across stages. Here we consider a general, but realistic, scenario in which a predictive model is learned from (potentially biased) training data, and model predictions are assessed post-hoc for fairness by some auditing method. We provide a theoretical analysis of how a specific form of data bias, differential sampling bias, propagates from the data stage to the prediction stage. Unlike prior work, we evaluate the downstream impacts of data biases quantitatively rather than qualitatively and prove theoretical guarantees for detection. Under reasonable assumptions, we quantify how the amount of bias in the model predictions varies as a function of the amount of differential sampling bias in the data, and at what point this bias becomes provably detectable by the auditor. Through experiments on two criminal justice datasets (the well-known COMPAS dataset and historical data from the NYPD's stop-and-frisk policy), we demonstrate that the theoretical results hold in practice even when our assumptions are relaxed.
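The abstract describes differential sampling bias flowing from training data into model predictions, where an auditor then looks for it post-hoc. The Python sketch below is only an illustration of that scenario, not the paper's method or theoretical analysis: it under-samples one group's positive examples (a hypothetical bias parameter), trains a simple classifier, and audits the predictions for a gap in positive prediction rates. All data, names, and parameter values are assumptions made for the example.

# Illustrative simulation only (an assumption-laden sketch, not the paper's method):
# inject differential sampling bias into training data, fit a classifier,
# and audit the resulting predictions for a group disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n):
    # Synthetic population: group membership, one feature, and a label that
    # depends on the feature only, not on the group.
    group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
    x = rng.normal(size=n)
    y = (x + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return group, x, y

def biased_sample(group, x, y, keep_prob=0.3):
    # Differential sampling bias: keep only a fraction of group B's positives.
    keep = np.ones(len(y), dtype=bool)
    mask = (group == 1) & (y == 1)
    keep[mask] = rng.random(mask.sum()) < keep_prob
    return group[keep], x[keep], y[keep]

# Biased training data, unbiased audit data.
g_tr, x_tr, y_tr = biased_sample(*make_population(20000))
g_te, x_te, y_te = make_population(20000)

# Train on the biased sample; include group as a feature so the bias can propagate.
clf = LogisticRegression().fit(np.column_stack([x_tr, g_tr]), y_tr)
pred = clf.predict(np.column_stack([x_te, g_te]))

# Post-hoc audit: compare positive prediction rates between groups.
rate_a = pred[g_te == 0].mean()
rate_b = pred[g_te == 1].mean()
print(f"positive rate, group A: {rate_a:.3f}")
print(f"positive rate, group B: {rate_b:.3f}")
print(f"audited disparity:      {rate_a - rate_b:.3f}")

In this toy setup, lowering keep_prob (more severe under-sampling) widens the audited disparity, which is the qualitative pattern the paper quantifies and proves detectable under its stated assumptions.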

Citation

Ravishankar, Pavan, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. "Provable Detection of Propagating Sampling Bias in Prediction Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9562–9569. (Presented at the 37th AAAI Conference on Artificial Intelligence (2/7/23-2/14/23) in Washington, DC.)

About The Author

Edward McFowland III

Technology and Operations Management

More from the Authors

    • March 2025
    • Information and Organization

    Novice Risk Work: How Juniors Coaching Seniors on Emerging Technologies Such as Generative AI Can Lead to Learning Failures

    By: Katherine C. Kellogg, Hila Lifshitz-Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon and Karim R. Lakhani

    • May 2024
    • Faculty Research

    Pernod Ricard: Uncorking Digital Transformation

    By: Iavor Bojinov, Edward McFowland III, François Candelon, Nikolina Jonsson and Emer Moloney

    • January 2024
    • Bioinformatics

    Subset Scanning for Multi-Trait Analysis Using GWAS Summary Statistics

    By: Rui Cao, Evan Olawsky, Edward McFowland III, Erin Marcotte, Logan Spector and Tianzhong Yang