Publications
- April 2024 (Revised December 2024)
- HBS Case Collection
Anthropic: Building Safe AI
By: Shikhar Ghosh and Shweta Bagai
Abstract
In late 2024, Anthropic, a leading AI safety and research company, achieved a significant breakthrough with computer use capabilities that allowed AI to interact with computers like humans. Co-founded by former OpenAI employees and known for its generative AI technology Claude, Anthropic had grown rapidly to a potential $30-40 billion valuation—while maintaining its distinctive focus on safe AI development. As a Public Benefit Corporation, the company prioritized public good alongside financial returns, even delaying product releases to ensure appropriate safety protocols—a stark contrast to competitors like OpenAI, whose release of ChatGPT had triggered an AI arms race.
CEO Dario Amodei believed AI could usher in unprecedented improvements in human quality of life—from accelerating scientific discoveries and curing diseases to lifting billions out of poverty. However, he also recognized serious risks, from immediate concerns about misinformation to long-term existential threats. As a company with aggressive growth targets and a 75x revenue multiple, and in light of OpenAI's recent board replacement demonstrating the fragility of safety-focused governance, Amodei faced critical questions: Did Anthropic's corporate structure effectively guard against profit-driven incentives that could compromise safety? As AI models became more powerful, what tools should Anthropic develop and share to prevent harm? How should the company engage with inherently geopolitical issues as AI became increasingly embedded in society?
Keywords
AI and Machine Learning; Corporate Accountability; Corporate Social Responsibility and Impact; Business Growth and Maturation; Corporate Strategy; Technology Industry; United States
Citation
Ghosh, Shikhar, and Shweta Bagai. "Anthropic: Building Safe AI." Harvard Business School Case 824-129, April 2024. (Revised December 2024.)