16 Results
- 2023
- Working Paper
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
By: Neil Menghani, Edward McFowland III and Daniel B. Neill
In this paper, we develop a new criterion, "insufficiently justified disparate impact" (IJDI), for assessing whether recommendations (binarized predictions) made by an algorithmic decision support tool are fair. Our novel, utility-based IJDI criterion evaluates false...
Menghani, Neil, Edward McFowland III, and Daniel B. Neill. "Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness." Working Paper, June 2023.
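In rough terms, a utility-based disparate-impact check compares a subgroup's rate of positive recommendations with the rate its estimated utility would justify. The sketch below illustrates that idea only; the benchmark, tolerance, and function name are invented here and are not the paper's IJDI definition.

```python
import numpy as np

def flag_unjustified_impact(y_pred, utility, group_mask, tolerance=0.05):
    """Toy subgroup check: is a subgroup recommended (y_pred == 1) at a rate
    meaningfully below what its estimated per-person utility would justify?
    The utility scores and the tolerance are illustrative assumptions."""
    observed = y_pred[group_mask].mean()       # subgroup's positive-recommendation rate
    justified = utility[group_mask].mean()     # stand-in utility-based benchmark
    return (justified - observed) > tolerance  # True = potentially unjustified impact
```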
- 2023
- Working Paper
Auditing Predictive Models for Intersectional Biases
By: Kate S. Boxer, Edward McFowland III and Daniel B. Neill
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we...
Boxer, Kate S., Edward McFowland III, and Daniel B. Neill. "Auditing Predictive Models for Intersectional Biases." Working Paper, June 2023.
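For intuition, the brute-force version of an intersectional audit simply enumerates every combination of protected attributes and measures each cell's calibration gap, as sketched below; doing this scalably over exponentially many subgroups is the problem the paper addresses. Column names and the minimum cell size are placeholder assumptions.

```python
import pandas as pd

def audit_intersections(df, protected_cols, y_col="y", p_col="p", min_n=50):
    """Enumerate intersections of protected attributes and report each cell's
    calibration gap (mean prediction minus mean outcome). Brute force shown
    for intuition only; a real audit must handle the combinatorial blow-up."""
    rows = []
    for keys, cell in df.groupby(protected_cols):
        if len(cell) < min_n:
            continue  # tiny cells give noisy estimates
        rows.append({"subgroup": keys, "n": len(cell),
                     "gap": cell[p_col].mean() - cell[y_col].mean()})
    return pd.DataFrame(rows).sort_values("gap", key=abs, ascending=False)
```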
- 2023
- Article
Provable Detection of Propagating Sampling Bias in Prediction Models
By: Pavan Ravishankar, Qingyu Mo, Edward McFowland III and Daniel B. Neill
With an increased focus on incorporating fairness in machine learning models, it becomes imperative not only to assess and mitigate bias at each stage of the machine learning pipeline but also to understand the downstream impacts of bias across stages. Here we consider...
Ravishankar, Pavan, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. "Provable Detection of Propagating Sampling Bias in Prediction Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (2023): 9562–9569. (Presented at the 37th AAAI Conference on Artificial Intelligence, February 7–14, 2023, Washington, DC.)
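A minimal simulation shows the phenomenon the abstract points to: a group-dependent sampling stage upstream distorts the model trained downstream, even though the model never sees group membership. All numbers below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy pipeline: an upstream, group-dependent sampling stage silently shifts
# the label distribution a downstream model learns from.
rng = np.random.default_rng(0)
n = 20_000
g = rng.binomial(1, 0.5, n)                      # protected group
x = rng.normal(0.5 * g, 1.0, n)                  # feature correlated with group
y = (x + rng.normal(0, 1, n) > 0.5).astype(int)  # outcome depends on x only

# Biased data-collection stage: drop 70% of positive cases from group 1.
keep = rng.random(n) < np.where((g == 1) & (y == 1), 0.3, 1.0)

fair = LogisticRegression().fit(x[:, None], y)
biased = LogisticRegression().fit(x[keep][:, None], y[keep])

# The sampling bias propagates: group 1's scores are systematically depressed.
drop = (biased.predict_proba(x[g == 1][:, None])[:, 1].mean()
        - fair.predict_proba(x[g == 1][:, None])[:, 1].mean())
print(f"average score shift for group 1: {drop:+.3f}")
```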
- 2023
- Working Paper
The Limits of Algorithmic Measures of Race in Studies of Outcome Disparities
By: David S. Scharfstein and Sergey Chernenko
We show that the use of algorithms to predict race has significant limitations in measuring and understanding the sources of racial disparities in finance, economics, and other contexts. First, we derive theoretically the direction and magnitude of measurement bias in...
Keywords:
Racial Disparity;
Paycheck Protection Program;
Measurement Error;
AI and Machine Learning;
Race;
Measurement and Metrics;
Equality and Inequality;
Prejudice and Bias;
Forecasting and Prediction;
Outcome or Result
Scharfstein, David S., and Sergey Chernenko. "The Limits of Algorithmic Measures of Race in Studies of Outcome Disparities." Working Paper, April 2023.
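The core measurement problem is easy to reproduce: a race proxy that misclassifies some individuals pulls the estimated outcome gap toward zero. The simulation below uses invented parameters (a 20% minority share, a true gap of -0.30, an 85%-accurate proxy) purely to illustrate the direction of the bias.

```python
import numpy as np

# Classic attenuation from a misclassified group indicator.
rng = np.random.default_rng(0)
n = 200_000
race = rng.binomial(1, 0.2, n)
outcome = 1.0 - 0.3 * race + rng.normal(0, 1, n)  # true gap: -0.30

correct = rng.random(n) < 0.85                    # assumed proxy accuracy
proxy = np.where(correct, race, 1 - race)

true_gap = outcome[race == 1].mean() - outcome[race == 0].mean()
proxy_gap = outcome[proxy == 1].mean() - outcome[proxy == 0].mean()
print(f"true gap  {true_gap:+.3f}")   # ~ -0.300
print(f"proxy gap {proxy_gap:+.3f}")  # ~ -0.16: biased toward zero
```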
- 2022
- Working Paper
Improving Human-Algorithm Collaboration: Causes and Mitigation of Over- and Under-Adherence
By: Maya Balakrishnan, Kris Ferreira and Jordan Tong
Even if algorithms make better predictions than humans on average, humans may sometimes have “private” information, unavailable to the algorithm, that can improve performance. How can we help humans effectively use and adjust recommendations made by...
Keywords:
Cognitive Biases;
Algorithm Transparency;
Forecasting and Prediction;
Behavior;
AI and Machine Learning;
Analytics and Data Science;
Cognition and Thinking
Balakrishnan, Maya, Kris Ferreira, and Jordan Tong. "Improving Human-Algorithm Collaboration: Causes and Mitigation of Over- and Under-Adherence." Working Paper, December 2022.
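One common way to quantify over- and under-adherence is a weight-on-advice style ratio between a person's own estimate, the algorithm's recommendation, and their final answer. The sketch below shows that standard ratio for intuition; it is not necessarily the paper's exact measure.

```python
def weight_on_own_estimate(final, algo, own):
    """'Weight on advice'-style ratio: 0 means full adherence to the
    algorithm's recommendation, 1 means the person kept their own initial
    estimate; values outside [0, 1] indicate over-adjustment."""
    if own == algo:
        return 0.0  # no disagreement, nothing to adhere to
    return (final - algo) / (own - algo)

# Example: the algorithm says 100, the person initially thought 140,
# and they submit 110 -- they mostly adhered to the algorithm.
print(weight_on_own_estimate(110, 100, 140))  # 0.25
```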
- October–December 2022
- Article
Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem
By: Mochen Yang, Edward McFowland III, Gordon Burtch and Gediminas Adomavicius
Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to "mine" variables of interest from available data, followed...
Keywords:
Machine Learning;
Econometric Analysis;
Instrumental Variable;
Random Forest;
Causal Inference;
AI and Machine Learning;
Forecasting and Prediction
Yang, Mochen, Edward McFowland III, Gordon Burtch, and Gediminas Adomavicius. "Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem." INFORMS Journal on Data Science 1, no. 2 (October–December 2022): 138–155.
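The broad instrumental-variable strategy for this measurement-error problem can be sketched compactly: produce two independent predictions of the variable of interest and use one to instrument the other. The code below trains two forests on disjoint halves of the labeled data and runs 2SLS by hand; it illustrates the general idea under those assumptions rather than the paper's exact construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def split_prediction_iv(X_lab, z_lab, X_main, y_main, seed=0):
    """Sketch of the measurement-error fix: train two forests on disjoint
    halves of the labeled data so their prediction errors are (approximately)
    independent, then use one prediction as an instrument for the other."""
    half = len(z_lab) // 2
    f1 = RandomForestRegressor(random_state=seed).fit(X_lab[:half], z_lab[:half])
    f2 = RandomForestRegressor(random_state=seed).fit(X_lab[half:], z_lab[half:])
    x_hat, x_iv = f1.predict(X_main), f2.predict(X_main)

    # 2SLS by hand: project the noisy regressor on the instrument, then
    # regress the outcome on the first-stage fitted values.
    Z = np.column_stack([np.ones(len(x_iv)), x_iv])
    fitted = Z @ np.linalg.lstsq(Z, x_hat, rcond=None)[0]
    W = np.column_stack([np.ones(len(fitted)), fitted])
    beta = np.linalg.lstsq(W, y_main, rcond=None)[0]
    return beta[1]  # slope on the mined variable, corrected for attenuation
```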
- May 2022 (Revised April 2023)
- Case
LOOP: Driving Change in Auto Insurance Pricing
By: Elie Ofek and Alicia Dadlani
John Henry and Carey Anne Nadeau, co-founders and co-CEOs of LOOP, an insurtech startup based in Austin, Texas, were on a mission to modernize the archaic $250 billion automobile insurance market. They sought to create equitably priced insurance by eliminating pricing...
Keywords:
AI and Machine Learning;
Technological Innovation;
Equality and Inequality;
Prejudice and Bias;
Growth and Development Strategy;
Customer Relationship Management;
Price;
Insurance Industry;
Financial Services Industry
Ofek, Elie, and Alicia Dadlani. "LOOP: Driving Change in Auto Insurance Pricing." Harvard Business School Case 522-073, May 2022. (Revised April 2023.)
- March 8, 2022
- Article
Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)
By: Eva Ascarza and Ayelet Israeli
An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected”...
Keywords:
Algorithm Bias;
Personalization;
Targeting;
Generalized Random Forests (GRF);
Discrimination;
Customization and Personalization;
Decision Making;
Fairness;
Mathematical Methods
Ascarza, Eva, and Ayelet Israeli. "Eliminating Unintended Bias in Personalized Policies Using Bias-Eliminating Adapted Trees (BEAT)." Proceedings of the National Academy of Sciences 119, no. 11 (March 8, 2022): e2115126119.
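A quick diagnostic for the problem BEAT addresses is to check whether a policy that targets on estimated treatment effects ends up hitting protected groups at very different rates. The sketch below is that check only, with hypothetical inputs; BEAT itself adapts the tree-building step of generalized random forests rather than auditing after the fact.

```python
import numpy as np
import pandas as pd

def policy_rates_by_group(effect_scores, protected, threshold):
    """After personalizing on estimated treatment effects, compare how often
    the resulting policy targets each protected group. Diagnostic only."""
    targeted = effect_scores >= threshold
    return pd.Series(targeted).groupby(pd.Series(protected)).mean()

# Hypothetical example: scores from any uplift model, binary protected flag.
rng = np.random.default_rng(0)
print(policy_rates_by_group(rng.normal(0, 1, 1000),
                            rng.binomial(1, 0.4, 1000), threshold=0.5))
```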
- September–October 2021
- Article
Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb
By: Shunyuan Zhang, Nitin Mehta, Param Singh and Kannan Srinivasan
We study the effect of Airbnb’s smart-pricing algorithm on the racial disparity in the daily revenue earned by Airbnb hosts. Our empirical strategy exploits Airbnb’s introduction of the algorithm and its voluntary adoption by hosts as a quasi-natural experiment. Among...
Keywords:
Smart Pricing;
Pricing Algorithm;
Machine Bias;
Discrimination;
Racial Disparity;
Social Inequality;
Airbnb Revenue;
Revenue;
Race;
Equality and Inequality;
Prejudice and Bias;
Price;
Mathematical Methods;
Accommodations Industry
Zhang, Shunyuan, Nitin Mehta, Param Singh, and Kannan Srinivasan. "Frontiers: Can an AI Algorithm Mitigate Racial Economic Inequality? An Analysis in the Context of Airbnb." Marketing Science 40, no. 5 (September–October 2021): 813–820.
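The quasi-experimental comparison the abstract describes boils down to a difference-in-differences question: does the racial revenue gap change when hosts adopt the algorithm? The snippet below runs that regression on a synthetic sample; every number and column name is made up for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({"black_host": rng.binomial(1, 0.3, n),
                   "adopted": rng.binomial(1, 0.5, n)})
df["log_revenue"] = (-0.25 * df["black_host"]                  # assumed baseline gap
                     + 0.05 * df["adopted"]
                     + 0.15 * df["adopted"] * df["black_host"]  # assumed: gap narrows
                     + rng.normal(0, 0.5, n))

did = smf.ols("log_revenue ~ adopted * black_host", data=df).fit()
print(did.params["adopted:black_host"])  # ~ +0.15: adoption narrows the gap
```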
- 2021
- Working Paper
Invisible Primes: Fintech Lending with Alternative Data
By: Marco Di Maggio, Dimuthu Ratnadiwakara and Don Carmichael
We exploit anonymized administrative data provided by a major fintech platform to investigate whether using alternative data to assess borrowers’ creditworthiness results in broader credit access. Comparing actual outcomes of the fintech platform’s model to...
Keywords:
Fintech Lending;
Alternative Data;
Machine Learning;
Algorithm Bias;
Finance;
Information Technology;
Financing and Loans;
Analytics and Data Science;
Credit
Di Maggio, Marco, Dimuthu Ratnadiwakara, and Don Carmichael. "Invisible Primes: Fintech Lending with Alternative Data." Harvard Business School Working Paper, No. 22-024, October 2021.
- September 17, 2021
- Article
AI Can Help Address Inequity—If Companies Earn Users' Trust
By: Shunyuan Zhang, Kannan Srinivasan, Param Singh and Nitin Mehta
While companies may spend a lot of time testing models before launch, many spend too little time considering how they will work in the wild. In particular, they fail to fully consider how rates of adoption can warp developers’ intent. For instance, Airbnb launched a...
Keywords:
Artificial Intelligence;
Algorithmic Bias;
Technological Innovation;
Perception;
Diversity;
Equality and Inequality;
Trust;
AI and Machine Learning
Zhang, Shunyuan, Kannan Srinivasan, Param Singh, and Nitin Mehta. "AI Can Help Address Inequity—If Companies Earn Users' Trust." Harvard Business Review Digital Articles (September 17, 2021).
- 2021
- Chapter
Towards a Unified Framework for Fair and Stable Graph Representation Learning
By: Chirag Agarwal, Himabindu Lakkaraju and Marinka Zitnik
As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between counterfactual...
Agarwal, Chirag, Himabindu Lakkaraju, and Marinka Zitnik. "Towards a Unified Framework for Fair and Stable Graph Representation Learning." In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, edited by Cassio de Campos and Marloes H. Maathuis, 2114–2124. AUAI Press, 2021.
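The chapter's key connection can be probed with a simple counterfactual-stability check: flip only the sensitive attribute and measure how far the learned representations move. The sketch below assumes a generic `encode` function standing in for a trained node encoder; the interface and the toy linear example are simplifying assumptions.

```python
import numpy as np

def counterfactual_shift(encode, X, sens_idx):
    """A fair, stable representation should barely move when only the
    sensitive attribute is flipped; this measures the average shift."""
    X_cf = X.copy()
    X_cf[:, sens_idx] = 1 - X_cf[:, sens_idx]  # counterfactual: flip sensitive bit
    shift = np.linalg.norm(encode(X) - encode(X_cf), axis=1)
    return shift.mean()                        # 0 would be perfectly invariant

# Toy linear "encoder" that ignores the sensitive column (index 0):
rng = np.random.default_rng(0)
X = rng.binomial(1, 0.5, (100, 4)).astype(float)
W = rng.normal(0, 1, (4, 8))
W[0] = 0.0
print(counterfactual_shift(lambda A: A @ W, X, sens_idx=0))  # 0.0
```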
- 2020
- Working Paper
(When) Does Appearance Matter? Evidence from a Randomized Controlled Trial
By: Prithwiraj Choudhury, Tarun Khanna, Christos A. Makridis and Subhradip Sarker
While there is evidence about labor market discrimination based on race, religion, and gender, we know little about whether physical appearance leads to discrimination in labor market outcomes. We deploy a randomized experiment on 1,000 respondents in India between...
Keywords:
Behavioral Economics;
Coronavirus;
Discrimination;
Homophily;
Labor Market Mobility;
Limited Attention;
Resumes;
Personal Characteristics;
Prejudice and Bias
Choudhury, Prithwiraj, Tarun Khanna, Christos A. Makridis, and Subhradip Sarker. "(When) Does Appearance Matter? Evidence from a Randomized Controlled Trial." Harvard Business School Working Paper, No. 21-038, September 2020.
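Random assignment is what lets such a design speak to causation: any systematic difference in evaluations across photo conditions can be attributed to appearance. The simulated analysis below shows the shape of that comparison; the data and effect size are invented, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
treat = rng.binomial(1, 0.5, n)                   # randomly assigned photo condition
rating = 5.0 + 0.2 * treat + rng.normal(0, 1, n)  # assumed appearance effect

t_stat, p_val = stats.ttest_ind(rating[treat == 1], rating[treat == 0])
diff = rating[treat == 1].mean() - rating[treat == 0].mean()
print(f"effect {diff:+.2f}, p = {p_val:.4f}")     # randomization makes this causal
```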
- August 2020
- Article
Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation
By: Prithwiraj Choudhury, Evan Starr and Rajshree Agarwal
The use of machine learning (ML) for productivity in the knowledge economy requires consideration of important biases that may arise from ML predictions. We define a new source of bias related to incompleteness in real-time inputs, which may result from strategic...
Choudhury, Prithwiraj, Evan Starr, and Rajshree Agarwal. "Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation." Strategic Management Journal 41, no. 8 (August 2020): 1381–1411.
- March 2019
- Case
Wattpad
By: John Deighton and Leora Kornfeld
How to run a platform to match four million writers of stories to 75 million readers? Use data science. Make money by doing deals with television and filmmakers and book publishers. The case describes the challenges of matching readers to stories and of helping writers...
Keywords:
Platform Businesses;
Creative Industries;
Publishing;
Data Science;
Machine Learning;
Collaborative Filtering;
Women And Leadership;
Managing Data Scientists;
Big Data;
Recommender Systems;
Digital Platforms;
Information Technology;
Intellectual Property;
Analytics and Data Science;
Publishing Industry;
Entertainment and Recreation Industry;
Canada;
United States;
Philippines;
Viet Nam;
Turkey;
Indonesia;
Brazil
Deighton, John, and Leora Kornfeld. "Wattpad." Harvard Business School Case 919-413, March 2019.
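As a flavor of the reader-story matching problem, the toy snippet below runs item-based collaborative filtering on a tiny read matrix: recommend stories whose reader overlap most resembles what a user already liked. It is purely illustrative of the general technique, not Wattpad's system.

```python
import numpy as np

reads = np.array([  # rows = readers, cols = stories; 1 = read and liked
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
])
norm = np.linalg.norm(reads, axis=0, keepdims=True)
sim = (reads.T @ reads) / (norm.T @ norm + 1e-9)  # cosine similarity between stories
np.fill_diagonal(sim, 0)
scores = reads @ sim                              # score unseen stories per reader
scores[reads == 1] = -np.inf                      # don't re-recommend what's read
print(scores.argmax(axis=1))                      # top pick for each reader
```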
- 2023
- Chapter
Marketing Through the Machine’s Eyes: Image Analytics and Interpretability
By: Shunyuan Zhang, Flora Feng and Kannan Srinivasan
The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured data and can inform recommendations for increasing profits and consumer utility—if only the...
Zhang, Shunyuan, Flora Feng, and Kannan Srinivasan. "Marketing Through the Machine’s Eyes: Image Analytics and Interpretability." Chap. 8 in Artificial Intelligence in Marketing, Vol. 20, edited by Naresh K. Malhotra, K. Sudhir, and Olivier Toubia. Review of Marketing Research. Emerald Publishing Limited, forthcoming.