BiGS Actionable Intelligence:
BOSTON—Sam Altman, OpenAI’s co-founder and CEO and a leading voice in artificial intelligence, on May 1 spent a day at Harvard University speaking to students, faculty, and business leaders about the role of AI in today’s world and what executives, educators, regulators, and everyday workers who might lose their jobs can expect in the months and years ahead.
Altman was on campus to receive the 2024 Xfund Experiment Cup, which Xfund awards to company founders. Xfund, a partnership between venture capital firms and research universities that backs founders who think across disciplines, was Altman's first investor.
During his Harvard visit, Altman had discussions with Harvard Business School senior associate dean Debora Spar; Barbara DeLollis, head of communications at Harvard Business School's Institute for Business in Global Society (BiGS); Patrick Chung, Xfund's managing general partner; and others from the university community.
To collect Altman's comments, The BiGS Fix distilled his discussions at several events. The following Q&A does not reproduce all 120 minutes of Altman's meetings; instead, it presents the highlights, synthesized from multiple interviews. All material has been edited for length, clarity, and style.
Q. What excited you about AI as an undergraduate?
A. I really like things that, if they work, really matter—even if they don’t have a super-high chance of working. So, it seemed like if AI could work, it would be the coolest, most important, most exciting thing. And so, it was worth pursuing. The expected value was high, even if the chances were low.
Q. If you could go back to the 19-year-old Sam Altman, what would you tell him?
A. I think that you can just ‘do stuff’ in the world, and this is not well taught. I certainly did not know it when I was 19. The way that progress happens is people just work really hard, decide what they have conviction in and dedicate themselves to that. That is the only way that things happen, and the world gets better. You don’t need to wait. You don’t need to get permission. You can—even if you’re totally unknown in the world, with almost no resources—you can still accomplish an amazing amount.
Attacking problems with persistence
Q. How did you get so much done when you were unknown?
A. One of the things that [Y Combinator Co-founder] Paul Graham used to say, that never became venerated advice to the degree it should have, is the idea that you should try to be relentlessly resourceful. Surprisingly often, if you just keep looking for attack vectors on a problem in front of you, you can figure it out. I think this is one of the most important skills in life. It’s surprisingly learnable or teachable, and it works in almost all scenarios.
Q. Can you give us an example of a time it worked?
A. [Note: Altman is referencing a period before OpenAI] We needed to figure out how to get a deal done with a mobile operator, and they didn't really work with startups or technology companies in general. We probably tried 30 different paths into this company. At one point, the key decision maker said, 'I'm finally going to meet with you because I want you to stop bothering us.' You can keep doing that until something works. Most people would just stop at the first ignored email, or at the first path where they don't find the right person in a company, or at least at the second. But it was a life-and-death thing for our company, so we were very motivated.
Q. How do you know when to quit?
A. I think there's definitely a balance. You can clearly take it too far and not learn or not adapt. But I never figured out the question that, when I was running [Y Combinator], startups always asked, which is 'how do I know when to give up on my startup?' I spent a lot of time trying to come up with a rubric for it. I never did. I think all of these things are judgment calls. … It's hard to say, 'here's the one recipe that always works.'
Q. When you started OpenAI, why did you make it a nonprofit?
A. If you start a for-profit, you have to theoretically have some idea of what you're going to do to be a moneymaking entity someday. Our initial conception of OpenAI was, let's just be a research lab and see if we can get anything at all to work.
It took a long time to get anything at all to work. And then the stuff that we did get to work for a while, other than good research that pointed us to the next step on the path, had nothing to do with what we do now. We made something that could play [the video game] Dota 2, but that's very hard to build a business around. We made a robot hand that could barely do a Rubik's Cube. But eventually, after a lot of stumbling in the dark, we did figure out something that turned out to be a product and a real business.
We also found that … you don't get to choose where the science goes. You just have to follow it. And where we had to follow it turned out to require gigantic amounts of resources to keep pushing progress, so we needed a business model to fit that.
Impact on jobs and the workforce
Q. What do you say to workers right now who fear that they may lose their jobs in the next five years?
A. They may … although I think that AI will not eliminate all the jobs in the way most people think. … In every technological revolution, people predict the end of jobs, and it never happens. I don't think it will happen this time, but jobs will change. In most jobs, people will use AI tools to do [their jobs] better and faster. But some jobs will totally go away. New ones will get created. The shape of jobs will change.
Decades in the future, who knows? I think what those jobs look like may be very different than what you or I would consider a real job today. I think it is important to be upfront and honest with people that we expect this to happen—jobs to change, some jobs to go away, new jobs to get created—and also work with our leaders to figure out the social contract … and how that's got to evolve given this level of change.
The future of AI
Q. Can you tell us about a difficult product decision that you've had to make?
A. Our product decisions are downstream of the research decisions, and the research directions that we choose to pursue and not pursue are probably the most difficult and most important. On the product side, the behavior of ChatGPT, what it refuses, what it doesn't refuse, and where the limits are of what it will do for you—how we figure out where to set the alignment—those are probably the hardest product calls.
Q. Can you give us an example?
A. Should ChatGPT give legal advice or not? There are huge reasons not to do it. And obviously, with the hallucination or general inaccuracy issues with ChatGPT, it seems very reasonable to say it shouldn't do it. On the other hand, there are a lot of people in the world who can't afford legal advice. And if you can make it even imperfectly available, maybe it's better than not.
Q. Take us through the thought process and the decision you ended up with.
A. It mostly won't right now. You can get it to do it in some cases, in some ways. … I think users are pretty smart, and as long as you disclaim things and explain them properly, I think people can make adult decisions. What I'd like to get to is a world where we don't do things that have high potential for inaccuracies leading to misuse. But as the model gets better and those inaccuracies become less common, we can give you a dial, and you can say, 'I really understand I've got to check this advice, and it's very much on me if I don't.' This is not like clicking through terms of service that no one reads; people are really going to understand it. And then if you want to do it, we find some safe way to do that.
Q. What are the pieces of AI that you're most excited about?
A. Personally, I think greatly increasing the rate of scientific discovery is what I'm most excited about. I believe that if you zoom all the way out, that is how the world gets sustainably better. And I think doing more of that is awesome. There are a lot of other areas too. I think there'll be incredible AI tutors, incredible AI medical advisors. But personally speaking, I'm so excited about AI for science.
Q. What will we see ahead at OpenAI?
A. There’s someone, somewhere in OpenAI right now, making some phenomenally important discovery—I don’t know what it is, I just know it’s going to happen statistically—that may very much shape the future. I totally agree, on the surface, that we should feel tremendous responsibility and get big decisions right—but you don’t always know when that’s coming.
Regulation and policy
Q. What boundaries should be set on AI tools?
A. Really, what I think is that OpenAI should not be making those determinations. There should be a process by which society collectively negotiates, 'Here's how we're going to use this technology.' The rules should be uncomfortably permissive, but the defaults don't have to be so permissive. I think it's fine to say, 'the default is here; a user can customize within these very broad boundaries that society has agreed with, and most people won't like the edge of those bounds.' But there are still some boundaries, and there are still some things, particularly as these models get way more powerful, that we're not going to allow. And then within those bounds, the goal of the tool is to serve its user. And I think that's okay.
Q. Do you think that the government could have done OpenAI as well as you have?
A. In a well-functioning society, I think this would be a government project. But that's a big ‘if.’ Given that that's not happening, I think it's better that it happens this way and [actually] happens.
Q. You were appointed by the U.S. Department of Homeland Security to participate in the first-ever Artificial Intelligence Safety and Security Board. Why did you choose to participate?
A. I think one of the most important things to figure out with AI is how government can help play a role with all of us, and the progress we expect over the next couple of years makes that more important and urgent. This seems like an interesting first step to get people from industry talking with people in government who want to figure out how to use this technology and regulate it.
I think it's exciting that this is happening in private industry, but in a different time or a different configuration of the world, it would have been happening in the government. Given that it's not, I think a very close partnership is critical.
Q. Do you think the government has the capability to understand all of this?
A. No—or not yet, which is why I think doing things like this is really important. I think there's way too much distance between what we in the industry expect to happen in the next couple of years and what the government as a whole … believes is going to happen. One of us is wrong. And it would be good to get to a shared understanding.
Q. What are some considerations when it comes to writing AI policy?
A. First of all, I think policy has worked best when it's downstream of the science. There are all these things that sound like great policy ideas in theory. But if the technology goes in a super different way, you might need a very different set of policy tools.
There are a bunch of things you could do. One idea that I find quite compelling is something like an [International Atomic Energy Agency] for advanced AI systems. If you have one of the 10 systems in the world that is over this threshold, you're going to have the equivalent of weapons inspectors. We would focus international policy on the catastrophic risks that affect all of us, and different countries would get to set their own rules.
I think that’s a great idea—and it’s workable, if the technology goes like we are thinking it’s going to go.