Publications
  • May 2022
  • Case
  • HBS Case Collection

Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models

By: Tsedal Neeley and Stefani Ruper
  • Format: Print
  • Language: English
  • Pages: 21

Abstract

Dr. Timnit Gebru—a leading artificial intelligence (AI) computer scientist and co-lead of Google’s Ethical AI team—was messaging with one of her colleagues when she saw the words: “Did you resign?? Megan sent an email saying that she accepted your resignation.” Heart rate spiking, Gebru was shocked to find that her company account had been cut off. She scrolled through her personal inbox to find an email stating that the company could not agree to the conditions she had stipulated about a research paper critiquing large language models, and expressing disapproval of a message she had sent to an internal listserv about halting diversity, equity, and inclusion (DEI) efforts without accountability. Therefore, Google was accepting Gebru’s “resignation,” effective immediately. Gebru, who hadn’t submitted a formal resignation, realized she had been fired. She had been concerned that large language models were racing ahead with little appraisal of their potential risks and of debiasing strategies. Her ousting sent shockwaves through the AI and tech community. Thousands of people signed a petition against what they characterized as unprecedented research censorship. Nine members of Congress wrote to the company’s CEO, Sundar Pichai, questioning his commitment to ethical AI. The outspoken Gebru’s experience raises fundamental questions about countering AI bias. Could tech companies lead the way with in-house AI ethics research? Should that type of work instead reside with more objective actors outside of companies? On the other hand, shouldn’t those who best understand the technology be the ones to investigate the bias and ethical challenges that might crop up? The answers to these questions remain central as companies navigate the exponentially growing AI domain.

Keywords

Ethics; Employment; Corporate Social Responsibility and Impact; Technological Innovation

Citation

Neeley, Tsedal, and Stefani Ruper. "Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models." Harvard Business School Case 422-085, May 2022.

About the Author

Tsedal Neeley

Organizational Behavior

More from the Authors

    • February 2023
    • Faculty Research

    Nexus Market (B): After the Ultimatum

    By: Tsedal Neeley and Jeff Huizinga
    • February 2023
    • Faculty Research

    Nexus Market (A): Ukraine War Ripples into Silicon Valley

    By: Tsedal Neeley and Jeff Huizinga
    • Harvard Business Review

    Developing a Digital Mindset: How to Lead Your Organization into the Age of Data, Algorithms, and AI

    By: Tsedal Neeley and Paul Leonardi