Professor: Society must rethink income distribution in age of AI

Editor's note: The Institute for Business in Global Society (BiGS) invited economist Anton Korinek to speak about the impact of artificial intelligence on the economy during our AI in Society seminar series. Korinek is a professor at the University of Virginia's Department of Economics and Darden School of Business. He is an editor of The Oxford Handbook of AI Governance and has researched AI's effects on labor markets. Barbara DeLollis, former head of communications at BiGS, spoke with Korinek about AI's potential impact on the economy, how business leaders can adjust plans as the technology rapidly evolves, and how society can work to mitigate AI's risks. This interview has been edited for length, clarity, and style.

When you're talking with business leaders, how do you describe AI's impact on our economy?

Right now, we actually only see a very small impact. AI is not yet visible in the productivity statistics or macroeconomic variables. But we are expecting the impact to be really massive within the next couple of years. Businesses across the country and the world have been investing massively in AI and incorporating AI into their processes. So far, some of them have seen small payoffs, but I think the biggest payoffs are yet to come.

Are we nearing the point where AI matches human intelligence?

In a lot of domains, I think we have already crossed that point. In some sense, AI systems are better than most humans at performing math. They are much better at analyzing large quantities of text. They are much better in a growing number of domains. Right now, I think it is clear that AI is nowhere near as good as the best human experts in specific areas. But it is getting better really fast.

How do you advise business leaders who aren't used to planning around such short horizons?

I think of the great saying by [President Dwight D.] Eisenhower: "Plans are useless, but planning is indispensable." In some sense, AI systems are improving so rapidly that it's completely unpredictable what the world will look like in a couple of years. In five years, we may have artificial general intelligence [AGI], where AI systems are better than humans, or artificial super intelligence, where AI systems are far beyond our human intellect.

It's almost impossible to imagine the world under such scenarios. I think ultimately the best plan is to make sure you're constantly up to date with what's happening in AI and update the plans you have been making.

How do we prevent these technological advancements from benefiting only a few, while leaving many behind?

I think from an economic perspective, that's going to be the main challenge in the age of AI.

I anticipate our current system of income distribution, with people receiving most of their income from work or a pension, is just not going to work anymore after we have AGI. AGI would by definition be able to do essentially anything a human worker could do. And that means human workers would be easily substitutable by AI.

So, I think we need to fundamentally rethink our systems of income distribution. We need something like a universal basic income, however exactly we structure it, to make sure that when AI systems become better than humans at most cognitive tasks, and our economy can suddenly produce so much more, that humans can also share in those gains — that it doesn't immiserate the masses.

Is a universal basic income a radical idea?

It's absolutely a radical idea. And I think at this very moment we don't need or want something like a universal basic income, because it's hugely expensive and it would provide disincentives to work for a lot of people. Our economy relies on labor, and we want people who are able to do so to contribute to the economy. But if we do reach AGI, that in itself would be an absolutely radical development on the economic front. And that kind of radical development would require a radical response.

When you're having this conversation with business and political leaders, what response are you hearing?

Two years ago, I could tell people were not taking this seriously. I could tell people were like, “Oh yeah, that's some weird sci-fi scenario.” In the past half year, and the past couple of months especially, more and more of them are taking this very seriously. They see how AI is moving rapidly and producing output that was simply unimaginable a year or two ago. And if you follow that trajectory, I think you can see that it's just a matter of time until AI reaches the level of AGI. And whenever that happens, the economic, social, and political implications are going to be severe.

What dangers do you see if countries don't collaborate on AI governance?

Right now, we don't have a lot of global cooperation. We're in a race among the AI superpowers over who can make progress faster. I don't think AI systems are particularly dangerous yet, but as they get better, it would be in everyone's interest to establish common safety standards and make sure that this technology does not get out of hand.

Nobody in the world wants this technology to create massive risks for humanity as a whole. When we have systems capable enough to create those risks, we will need a global governance framework for how we mitigate those risks, just like we have with dangerous technologies like nuclear weapons.
