Publications
- Forthcoming
- New England Journal of Medicine AI
Disclosure, Humanizing, and Contextual Vulnerability of Generative AI Chatbots
By: Julian De Freitas and I. Glenn Cohen
Abstract
In the wake of recent advancements in generative AI, regulatory bodies are trying to keep pace. One key decision is whether to require app makers to disclose the use of generative AI-powered chatbots in their products. We suggest that some generative AI-based chatbots lead consumers to use them in unintended ways that create mental health risks, making these users contextually vulnerable, defined as a temporary state of susceptibility to harm or other adverse mental health effects arising from the interplay between a user's interactions with a particular system and the system's response. We argue that for health apps, including "medical devices" and "wellness apps," disclosure should be mandated. We also show how, even when chatbots are disclosed in these instances, they may still carry risks because of the tendency of app makers to humanize their chatbots. The current regulatory structure does not fully address these challenges. We discuss how app makers and regulators should proactively address this challenge by considering where apps fall along the continuum of perceived humanness and, in spaces connected to health needs, by either mandating or strongly recommending that neutral (non-humanized) chatbots be the default and that deviations from that default be justified.
Keywords
AI and Machine Learning; Governing Rules, Regulations, and Reforms; Applications and Software; Well-being
Citation
De Freitas, Julian, and I. Glenn Cohen. "Disclosure, Humanizing, and Contextual Vulnerability of Generative AI Chatbots." New England Journal of Medicine AI (forthcoming).