Publications
  • 2024
  • Working Paper

Warnings and Endorsements: Improving Human-AI Collaboration Under Covariate Shift

By: Matthew DosSantos DiSorbo and Kris Ferreira
  • Format: Print | Language: English | Pages: 49

Abstract

Problem definition: While artificial intelligence (AI) algorithms may perform well on data that are representative of the training set (inliers), they may err when extrapolating on non-representative data (outliers). These outliers often originate from covariate shift, where the joint distribution of input features changes from the training set to deployment. How can humans and algorithms work together to make better decisions when faced with outliers and inliers?

Methodology/results: We study a human-AI collaboration on prediction tasks using an anchor-and-adjust framework, and hypothesize that humans are biased towards naïve adjustment behavior: making adjustments to algorithmic predictions that are too similar across inliers and outliers, when ideally adjustments should be larger on outliers than on inliers. In an online lab experiment, we demonstrate that participants are indeed unable to sufficiently differentiate absolute adjustments to an AI algorithm when faced with both inliers and outliers, leading to a 143-176% increase in their absolute deviation from the optimal prediction compared to participants who only face either all inliers or all outliers. We design a 'warning' that alerts participants when feature values constitute outliers and, in a second experiment, we show that this warning helps participants differentiate adjustments, ultimately reducing their absolute deviation from the optimal prediction by an average of 31% on outliers and 35% on inliers. We demonstrate that an additional intervention, 'endorsements' that alert participants when feature values constitute inliers, reduces participants' absolute deviation from the optimal prediction on inliers by an additional 34% on average.

Managerial implications: Our work uncovers a behavioral bias towards naïve adjustment behavior and identifies a simple, educational intervention that mitigates this bias. Ultimately, we hope that this work will help managers best equip their employees with the knowledge they need to succeed in a human-AI collaboration.

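The 'warning' and 'endorsement' interventions both hinge on deciding whether an incoming feature vector is an inlier or an outlier relative to the training data. The abstract does not say how that decision is made; the sketch below is only a minimal, hypothetical illustration using a Mahalanobis-distance threshold, with all function names and the threshold value assumed rather than taken from the paper.

import numpy as np

# Hypothetical sketch (not the authors' method): flag a query point as an
# outlier when its Mahalanobis distance from the training distribution
# exceeds a threshold, and as an inlier otherwise.

def fit_reference(X_train):
    """Estimate the training-feature mean and the inverse of a regularized covariance."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
    return mu, np.linalg.inv(cov)

def warn_or_endorse(x, mu, cov_inv, threshold=3.0):
    """Return 'warning' if x lies far from the training distribution, else 'endorsement'."""
    d = np.sqrt((x - mu) @ cov_inv @ (x - mu))
    return "warning" if d > threshold else "endorsement"

# Example: training features clustered near the origin; one inlier and one
# outlier query, the latter standing in for a point produced by covariate shift.
rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(500, 3))
mu, cov_inv = fit_reference(X_train)
print(warn_or_endorse(np.array([0.2, -0.1, 0.3]), mu, cov_inv))   # endorsement
print(warn_or_endorse(np.array([6.0, -5.0, 7.0]), mu, cov_inv))   # warning
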
Keywords

AI and Machine Learning; Decision Choices and Conditions

Citation

DosSantos DiSorbo, Matthew, and Kris Ferreira. "Warnings and Endorsements: Improving Human-AI Collaboration Under Covariate Shift." Working Paper, February 2024.

About The Author

Kris Johnson Ferreira

Technology and Operations Management

More from the Authors

  • ReUp Education: Can AI Help Learners Return to College?
    By: Kris Ferreira, Christopher Thomas Ryan and Sarah Mehta
    Faculty Research, October 2023 (Revised June 2024)
  • Demand Learning and Pricing for Varying Assortments
    By: Kris Ferreira and Emily Mower
    Manufacturing & Service Operations Management, July–August 2023
  • Market Segmentation Trees
    By: Ali Aouad, Adam Elmachtoub, Kris J. Ferreira and Ryan McNellis
    Manufacturing & Service Operations Management, March–April 2023