Publications
- 2025
- HBS Working Paper Series
Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations
By: Jacqueline N. Lane, Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh and Pei-Hsin Wang
Abstract
Do AI-generated narrative explanations enhance human oversight or diminish it? We investigate this question through a field experiment with 228 evaluators screening 48 early-stage innovations under three conditions: human-only, black-box AI recommendations without explanations, and narrative AI with explanatory rationales. Across 3,002 screening decisions, we uncover a human-AI oversight paradox: under the high cognitive load of rapid innovation screening, AI-generated explanations increase reliance on AI recommendations rather than strengthening human judgment, potentially reducing meaningful human oversight. Screeners assisted by AI were 19 percentage points more likely to align with AI recommendations, an effect that was strongest when the AI advised rejection. Considering in-depth expert evaluations of the solutions, we find that while both AI conditions outperformed human-only screening, narrative AI showed no quality improvements over black-box recommendations despite higher compliance rates and may actually increase rejection of high-potential solutions. These findings reveal a fundamental tension: AI assistance improves overall screening efficiency and quality, but narrative persuasiveness may inadvertently filter out transformative innovations that deviate from standard evaluation frameworks.
Citation
Lane, Jacqueline N., Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh, and Pei-Hsin Wang. "Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations." Harvard Business School Working Paper, No. 25-001, August 2024. (Revised May 2025.)