MIT’s Daron Acemoglu to Business Leaders: Unchecked Power of Big Tech Poses Risks as AI Reshapes Society

MIT Professor Daron Acemoglu giving a presentation at HBS



Editor's note: This exclusive interview with Institute Professor Daron Acemoglu of MIT marks the first in our six-part series exploring the crucial intersection of technology and inequality based on Harvard Business School's invitation-only summit Inequality in the Digital Age. On March 8, 2024, Harvard Business School Associate Professor Michael Luca and the Race, Gender, Equity Initiative convened economists, psychologists and other scholars to explore how the latest technology boom is rapidly reshaping today's world. As part of our ongoing commitment to sharing world-class, evidence-based insights with you, BiGS Editor-in-Chief Barbara DeLollis engaged in candid conversations with six distinguished scholars on the sidelines. We've edited the transcript for length, clarity and style. Stay tuned for more and be sure to join our growing LinkedIn community!

Q. Thanks for sitting down with us to discuss your latest research and book, Power and Progress. In the book, you and co-author Simon Johnson explore how previous technological revolutions, like the windmill and the cotton gin, affected the distribution of wealth and information. Can these historical examples indicate what we're about to experience with the artificial intelligence (AI) boom?

A. There's a lot we can learn. In fact, the British Industrial Revolution—where our prosperity and great technological improvements originated—provides ample evidence of how things sometimes went very wrong. Sometimes poverty and inequality were created out of new technological breakthroughs; at other times, new technologies, together with new institutions, became the basis of shared prosperity.

AI is unique, and its effects on the economy are going to be unique. On the other hand, we have so much knowledge from history: about how similar breakthroughs in technology have worked out, under what conditions they have been foundations for shared prosperity, and when they have been tools in the hands of a narrow elite to exploit the rest of the population.

Q. What risks do technologies such as AI pose for workers?

A. The potential of machine learning and other digital technologies depends on how we conceive of and utilize them. So, it's wrong to ask the question, what will it do to labor? The right question is, "what will we decide to do with AI, and how will this impact inequality?" There is tremendous potential to use AI for good, but that's not the path we're on. On our current path, I think AI and [machine learning] will be used for monitoring workers and reducing their autonomy. [These technologies] will be used for automation, sidelining labor, and perhaps disempowering the already weaker members of our society.

Q. Can we trust the market to ensure that AI doesn't lead to these outcomes and exacerbate inequality?

A. No, absolutely not. I am a big believer in the market process. I don't think there's any alternative to the market for bringing more innovation and greater prosperity. Central planning is not going to be our future. But that doesn't mean that the market is going to get everything right. And one of the things it doesn't get right is precisely distributional issues and issues related to the direction of technology.

If we “leave it to the market,” whether we're going to use AI for automating work, monitoring workers, or creating new tasks, it's not the market that will have a say. It's Google, Facebook, Amazon, Microsoft, and OpenAI. That's not the market. We already live in a world in which there are very, very big corporations that have an enormous influence on our society, on the future of work.

And they're not the market. They have their own ideology, they have their own priorities, they have their own profit motives, they have their own business models. I don't think we can just think that these companies necessarily represent the interest of the market. Nor that there's a miraculous market that's going to solve all problems.

Q. Are we already seeing workers being disempowered through the use of these technologies?

A. We're definitely seeing that today. Workplaces have much more monitoring, much better control over workers. To some degree, better information is good. But then the question is, who controls that better information? What do you do with it? And if you look over the last 40 years, we see many digital technologies deepening inequality because they automated work and displaced workers, and they weren't used for creating new jobs for the same workers who were displaced. So, I think those are the pitfalls that we should try to avoid with AI. And I think, again, we are not likely to avoid them unless we make a course correction.

Q. Are you optimistic that we're starting to see a course correction?

A. I wouldn't say I am optimistic, but I would say I have hope that we have learned a lot. First of all, I think today we have a more open conversation about how we can use new technologies for workers. Labor unions have become more awake to the issues of digital technologies and AI, and I think there's a broader openness in U.S. society, one that was absent just four years ago, about regulation and reducing the power of big tech. I think all of those are useful developments.

Q. You mentioned before we started this interview that you have received some interesting feedback from Big Tech leaders about your book. What have you heard?

A. I think some of the most interesting people today work in the tech sector. There is a very diverse set of perspectives. And I would say, when people from the tech industry hear me talk or encounter my book and my general arguments, I get a love-and-hate message. There are some people who think [that] questioning what the tech industry is doing is sacrilege — that we are killing the golden goose that's going to deliver prosperity to the United States.

And other people are clearly worried about what their industry, their own firms are doing. And they're trying to develop a framework to make sense of it. And that's what I'm hoping that we contribute to.

Q. Do you believe that some of the labor activity we saw in 2023, such as the SAG-AFTRA and Hollywood writers union strikes, helped spark some of that conversation?

A. It may have — that's harder for me to say. But the bigger thing I would say about labor is that leaders of the labor movement have realized that AI is here to stay, and they are investing in understanding AI and trying to develop an agenda of how to use it better. I think that's very important.

Q. You've said that in 10 years, we may see that AI isn't all it's cracked up to be compared with humans. Would you want to share any predictions?

A. I think it's impossible to predict what these amazingly impressive technologies are going to bring, but I would venture the following guesses. I think we're going to see a lot of investment in AI and related technologies. And I think that they are going to be rolled out very quickly in some businesses and there's going to be a lot of disappointment.

Q. What do you mean by a lot of disappointment?

A. There's going to be a lot of disappointment because, in the midst of the hype, many businesses will try to implement the technologies too fast. And in many cases, I think they're going to underappreciate what humans were doing and overestimate what AI can do, which is going to lead to disappointing results.

That doesn't mean that we're not going to see some impressive applications. We will. But I think, on the whole, we're not going to see the amazing productivity promises realized.

Q. What advice would you offer business leaders as they make big bets on AI?

A. When you talk about inequality and worker control, people may get the impression that it's just businesses versus workers. We live in a society in which there is conflict — it's inevitable. But I also think that there's a path both in the tech sector and in the broader corporate sector where businesses can flourish while doing things that are good for their workers.

I would say a key distinction is whether you see your workers as a cost or as a resource. If they are a cost, you're going to try to cut those costs. If they are a resource, which I think they are, then businesses can try to flourish while increasing their productivity, providing better tools for their workers, and improving their training and skills.

Q. I'd be remiss if I didn't ask about a key focus in your book, information and democracy, especially as the U.S. presidential election cycle heats up. Should American voters be worried?

A. You know, we've spoken a lot about inequality and about labor. But if you look at the book that Simon and I wrote, a large chunk of it is devoted to democracy, because every political regime depends on information. Every political regime depends to some degree on who controls information, how information is presented, who manipulates whom, and what agendas are encouraged.

And AI is changing that completely, and is changing it in a very lopsided way, because AI is very much concentrated in the hands of a few people and a few companies. And that raises a big threat to democracy.
