
Publications


Show Results For

• All HBS Web (70)
  • Faculty Publications (11)

Filter: Algorithmic Fairness

      Page 1 of 11 Results

      • Article

      Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)

      By: Eva Ascarza and Ayelet Israeli

An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...

      Keywords: Algorithm Bias; Personalization; Targeting; Generalized Random Forests (GRF); Discrimination; Customization and Personalization; Decision Making; Fairness; Mathematical Methods
Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.
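
The sketch below is a minimal, hypothetical illustration of the problem this abstract describes, not of the BEAT method itself (which builds on generalized random forests): a policy that never sees the protected attribute can still target groups at very different rates through a correlated proxy feature. All variable names and numbers are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)              # protected attribute (hypothetical binary flag)
proxy = rng.normal(loc=1.0 * group, scale=1.0)  # feature correlated with the protected attribute

# Hypothetical personalization policy: target whoever scores high on the proxy.
targeted = proxy > 0.5

for g in (0, 1):
    print(f"group {g}: targeting rate = {targeted[group == g].mean():.1%}")
# The two rates differ sharply even though `group` never enters the policy;
# this is the unintended disparity BEAT is designed to eliminate while
# preserving the value of personalization.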
      • Article

      Counterfactual Explanations Can Be Manipulated

      By: Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju and Sameer Singh
Counterfactual explanations are useful for both generating recourse and auditing fairness between groups. We seek to understand whether adversaries can manipulate counterfactual explanations in an algorithmic recourse setting: if counterfactual explanations indicate...
      Keywords: Machine Learning Models; Counterfactual Explanations
      Slack, Dylan, Sophie Hilgard, Himabindu Lakkaraju, and Sameer Singh. "Counterfactual Explanations Can Be Manipulated." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
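
As background for the abstract's setup, here is a minimal counterfactual-explanation sketch for a hypothetical linear scorer; the paper's actual subject, adversarial manipulation of such explanations, is beyond this toy example.

import numpy as np

w = np.array([1.5, -2.0, 0.5])     # hypothetical model weights
b = -0.25

def approved(x):
    return w @ x + b > 0           # binary decision, e.g., loan approval

x = np.array([0.1, 0.4, 0.2])      # an applicant who is currently denied
assert not approved(x)

# For a linear model, the smallest L2 change that flips the decision is a
# step along the weight vector, scaled by the distance to the boundary.
margin = -(w @ x + b)
x_cf = x + (margin + 1e-6) * w / (w @ w)

print("counterfactual:", np.round(x_cf, 3), "approved:", approved(x_cf))
# The difference x_cf - x is the "recourse" offered to the applicant; the
# paper shows how a model designer can game which recourse gets reported.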
      • 2021
      • Article

      Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring

      By: Tom Sühr, Sophie Hilgard and Himabindu Lakkaraju
Ranking algorithms are being widely employed in various online hiring platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the...
Sühr, Tom, Sophie Hilgard, and Himabindu Lakkaraju. "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
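
As a toy version of the kind of ranking audit the abstract alludes to (synthetic scores and groups, invented for illustration), one can compare each group's share of the top-k slots against its share of the candidate pool:

import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(size=20)            # platform-assigned relevance scores
group = rng.integers(0, 2, size=20)      # candidate group membership

k = 5
top_k = group[np.argsort(-scores)[:k]]   # groups of the k highest-ranked candidates
for g in (0, 1):
    print(f"group {g}: {np.mean(top_k == g):.0%} of top-{k} "
          f"vs {np.mean(group == g):.0%} of the pool")
# A fair-ranking intervention constrains the top-k shares to track the pool
# shares; the paper asks whether that constraint still helps minority
# candidates once human raters react to the reordered list.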
      • 2021
      • Article

      Fair Influence Maximization: A Welfare Optimization Approach

      By: Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice and Milind Tambe
Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed...
Rahmattalabi, Aida, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Max Izenberg, Ryan Brown, Eric Rice, and Milind Tambe. "Fair Influence Maximization: A Welfare Optimization Approach." Proceedings of the 35th AAAI Conference on Artificial Intelligence (2021).
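
A minimal sketch of the welfare idea on an invented five-node network: instead of maximizing total outreach, pick the seed set that maximizes the coverage of the worst-off group (the paper develops a much more general family of welfare objectives and scalable algorithms).

import itertools

# Hypothetical network: node -> set of nodes it reaches (including itself).
reach = {0: {0, 1}, 1: {1, 2}, 2: {2}, 3: {3, 4}, 4: {4}}
group = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
sizes = {g: sum(1 for v in group if group[v] == g) for g in set(group.values())}
budget = 2

def min_group_coverage(seeds):
    covered = set().union(*(reach[s] for s in seeds))
    return min(sum(1 for v in covered if group[v] == g) / sizes[g] for g in sizes)

best = max(itertools.combinations(reach, budget), key=min_group_coverage)
print("seeds:", best, "worst-off group coverage:", min_group_coverage(best))
# Under the maximin objective, seed sets that ignore group B entirely
# (e.g., {0, 1}) score zero, however much total reach they achieve.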
      • 2021
      • Article

      Fair Algorithms for Infinite and Contextual Bandits

By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
We study fairness in linear bandit problems. Starting from the notion of meritocratic fairness introduced in Joseph et al. [2016], we carry out a more refined analysis of a more general problem, achieving better performance guarantees with fewer modelling assumptions...
      Keywords: Algorithms; Bandit Problems; Fairness; Mathematical Methods
Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2021).
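
One round of the kind of selection rule meritocratic fairness suggests, sketched with made-up numbers: never favor an arm over another that could plausibly be better, i.e., randomize uniformly over every arm whose confidence interval overlaps the best lower bound (the paper's algorithms and guarantees are substantially more involved).

import numpy as np

rng = np.random.default_rng(2)
means = np.array([0.30, 0.50, 0.55])      # empirical mean reward per arm
halfwidth = np.array([0.10, 0.15, 0.05])  # confidence interval half-widths

lower, upper = means - halfwidth, means + halfwidth
plausible = upper >= lower.max()          # arms not confidently worse than the leader
arm = rng.choice(np.flatnonzero(plausible))
print("plausibly best arms:", np.flatnonzero(plausible), "-> play arm", arm)
# Arm 0 is confidently worse (upper bound 0.40 < 0.50) and is excluded; arms
# 1 and 2 cannot yet be separated, so each is played with equal probability.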
      • 2021
      • Conference Presentation

      An Algorithmic Framework for Fairness Elicitation

      By: Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton and Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders...
      Keywords: Algorithmic Fairness; Machine Learning; Fairness; Framework; Mathematical Methods
      Jung, Christopher, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton, and Zhiwei Steven Wu. "An Algorithmic Framework for Fairness Elicitation." Paper presented at the 2nd Symposium on Foundations of Responsible Computing (FORC), 2021.
      • 2019
      • Article

      Fair Algorithms for Learning in Allocation Problems

By: Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zachary Schutzman
Settings such as lending and policing can be modeled by a centralized agent allocating a scarce resource (e.g. loans or police officers) amongst several groups, in order to maximize some objective (e.g. loans given that are repaid, or criminals that are apprehended)...
      Keywords: Allocation Problems; Algorithms; Fairness; Learning
Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.
      • 2019
      • Article

      An Empirical Study of Rich Subgroup Fairness for Machine Learning

By: Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across...
      Keywords: Machine Learning; Fairness; AI and Machine Learning
      Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.
      • Article

      Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

By: Michael J. Kearns, Seth Neel, Aaron Leon Roth and Zhiwei Steven Wu
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier (like classification rate or false positive rate) across these groups...
      Keywords: Machine Learning; Algorithms; Fairness; Mathematical Methods
Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness." Proceedings of the 35th International Conference on Machine Learning (ICML) (2018).
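
To make the abstract's worry concrete, the sketch below builds a synthetic classifier (invented attributes and rates) whose false positive rate looks balanced on every marginal group yet varies wildly across conjunction subgroups, which is also the gap the "rich subgroup fairness" study above measures empirically.

import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 20_000
a1 = rng.integers(0, 2, size=n)   # hypothetical attribute, e.g., gender
a2 = rng.integers(0, 2, size=n)   # hypothetical attribute, e.g., age bucket
y = rng.integers(0, 2, size=n)    # true labels
# Classifier engineered to fire often exactly when a1 == a2.
yhat = (rng.uniform(size=n) < np.where(a1 == a2, 0.5, 0.1)).astype(int)

def fpr(mask):
    neg = mask & (y == 0)         # false positive rate within the subgroup
    return yhat[neg].mean()

for v in (0, 1):
    print(f"a1={v}: FPR {fpr(a1 == v):.2f}   a2={v}: FPR {fpr(a2 == v):.2f}")
for v1, v2 in product((0, 1), repeat=2):
    print(f"a1={v1} & a2={v2}: FPR {fpr((a1 == v1) & (a2 == v2)):.2f}")
# Every marginal FPR sits near 0.30, yet the four conjunctions swing between
# roughly 0.10 and 0.50: parity on pre-defined groups has been gerrymandered.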
      • 18 Nov 2016
      • Conference Presentation

      Rawlsian Fairness for Machine Learning

      By: Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth
Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity". In the context of a simple model of...
      Keywords: Machine Learning; Algorithms; Fairness; Decision Making; Mathematical Methods
      Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Rawlsian Fairness for Machine Learning." Paper presented at the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), November 18, 2016.
      • Research Summary

      Overview

      By: Himabindu Lakkaraju
      I develop machine learning tools and techniques which enable human decision makers to make better decisions. More specifically, my research addresses the following fundamental questions pertaining to human and algorithmic decision-making:

1. How to build...
      Keywords: Artificial Intelligence; Machine Learning; Decision Analysis; Decision Support
