Podcast
- 08 May 2019
- Managing the Future of Work
Prediction: How AI will affect business, work, and life
Bill Kerr: It would’ve been impossible to imagine in 1980 how computers would transform business and our lives. The changes that artificial intelligence will bring are similarly difficult to predict. What exactly will AI do? Which tasks will it take on? And how will it reshape businesses and change jobs? In their 2018 best-seller Prediction Machines, co-authors Ajay Agrawal, Joshua Gans, and Avi Goldfarb address these questions and many more.
Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Bill Kerr. Today I’m speaking with one of the co-authors, Joshua Gans, a professor at the University of Toronto. We’ll discuss the insights that they’ve framed about AI’s capabilities and how to unpack what AI might do to our economy. Welcome, Joshua.
Joshua Gans: Thanks, Bill.
Kerr: Joshua, why tie artificial intelligence to prediction?
Gans: Well, it’s not so much tying as that the recent advances in artificial intelligence are really advances in our ability to predict, and predict in a statistical sense. So we call them “prediction machines” because that’s precisely what’s going on. We have software that can now handle prediction at much higher quality and much lower cost than before.
Kerr: And were there any alternative framings that you guys considered before you settled down on prediction?
Gans: No, not really. There have always been broad discussions of artificial intelligence—will there ever be a truly artificial intelligence that can reason and be creative, et cetera? But machine learning is solidly about prediction. It’s solidly about taking the information that you have and turning it into the information that you need.
Kerr: That’s narrow AI, versus the artificial general intelligence that some people say may arrive around 2050.
Gans: Yeah, well the artificial general intelligence, that’s the exciting stuff. That’s the stuff you can make movies about, that’s the stuff that keeps people awake in classes—artificial intelligence producing paperclips that ends up destroying the world. That’s excellent stuff, and I like that as well. But sadly, that’s not what we currently have. Here, you and I are economists, so we’d be talking about a mere advance in statistics. That doesn’t quite grab the same headlines.
Kerr: You have a number of example applications in your book, and one of them is MBA admissions. Maybe talk us through how this could be used in MBA admissions and what we can learn about that.
Gans: So, MBA admissions is a classic prediction problem. What you’re doing is taking applications, past academic records, maybe references, maybe even interviews, and using that to predict someone’s success in the MBA program. So it is fundamentally a prediction activity. We currently have, at most places I assume, very human-driven approaches to that. And at some institutions, I’m sure yours included, there may be many thousands of applications. So how do you sort them out? How do you analyze them without things like bias? How do you find the nuggets? We know that some very successful MBA students are people who didn’t succeed as well academically in the past. How do you find those?
The alternative is to digitize a lot of that information and train a machine learning algorithm on it—for instance, to return a score: Jane Doe has an 80 percent chance of being successful in the program. Of course, that raises the question of what we mean by success.
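[To make the idea concrete: a minimal sketch of the kind of admissions predictor Gans describes, in Python with scikit-learn. The features, the toy data, and the definition of “success” are illustrative assumptions, not anything from the episode.]

```python
# Sketch: predict P(success in the MBA program) from digitized application data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past applicant: [GMAT score, undergrad GPA, years of work experience]
past_applicants = np.array([
    [720, 3.8, 5],
    [680, 3.2, 3],
    [750, 3.9, 7],
    [640, 2.9, 2],
])
# 1 = "successful" by whatever criterion the school settles on (the hard part)
succeeded = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(past_applicants, succeeded)

jane_doe = np.array([[700, 3.5, 4]])
print(f"Chance of success: {model.predict_proba(jane_doe)[0, 1]:.0%}")
```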
Kerr: Yes, take that next step. And how would you be defining a good outcome from this program?
Gans: Well, that’s very difficult. For one thing, there are different levels of criteria. First, you want to find MBA students who can handle the program and complete it. That is an issue in higher education everywhere, so that would be one thing you would want a probability for. Then you’d also be looking for the MBA students who are going to perform well academically in the program. Then you might be looking for the MBA students, because this is an entire cohort, who would work well with others. Well, that’s getting more interesting. How do you do that? And then you’re looking for the MBA students who, having gone through the program, will place at a higher-paying job, so they earn a rate of return on their MBA.
Kerr: And that helps all of our rankings in US News and World Report.
Gans: Exactly. We can see how that feeds into the school. So those are the sorts of criteria. But if you step back, you could say, “Well, maybe what I’m really interested in is producing MBA students who become so successful that they are the CEOs of the future. That enhances the value of our school and the value of the program.” Or they’re so successful that they become very rich entrepreneurs who end up giving back to the school in donations. And we don’t actually know what those criteria are. My guess is, if tomorrow you wanted to install an automated way of predicting admission to the MBA program, you would start with more-limited goals. But you can see how, automatically, you might be doing the wrong thing by your long-term future, your brand, your school, et cetera.
Kerr: And it will certainly involve a long conversation about what it is that we’re trying to maximize with this prediction.
Gans: A long conversation, but not a bad conversation, either. Going through the exercise of even this little part of it can sharpen the mind about what you really want.
Kerr: But this would be an example of sharpening up the admissions process. But we haven’t really redesigned what the university is doing. You also have examples of predictions affecting the business models that companies have. Tell us how that could feed into the overall business structure.
Gans: Well, there are a few examples we have. But let me continue with the university example and think about, okay, yes, we could predict admission into the program. But, wait a second. Let’s step back and ask, is that the real big uncertainty here? That’s an uncertainty for us from an operational point of view, and we’ve crafted our ways around it. But there are some things that we aren’t predicting at all. For instance, if you think about the MBA, one story of it is: You do an MBA so you will transform your career. Which way are you going to transform your career? Essentially, what you’re trying to do is take individuals, work out how to put them through an MBA, and match them with a career that is better. And then you have to have a discussion of what is better. A successful career? Well-being? Who knows what? Now, that’s the uncertainty that the MBA student is actually trying to resolve by coming to your program. Well, can you think about, what if we had an AI that was trained on measuring those things across the cohort of HBS alumni, et cetera? Then we could admit students and do things in the program that enhance that entire value. All of a sudden our entire mode of operating as a business school changes and really starts to center around those goals. And we can explicitly say to the world, “Yes, we are a career-transforming thing, and in a certain way,” as opposed to what we do now, which is let people in, hope for the best, and out they go. But we could design things around that. So that could change stuff that way.
So that’s how you can move up from prediction handling the mundane problems that we already know about, to real uncertainties, to innovation and a change in your whole approach to the organization.
Kerr: This connects into a concept you have in the book called “satisficing.” Tell us why an airport waiting lounge is satisficing. What does that mean?
Gans: Okay, so an airport waiting lounge, why is it there?
Kerr: Free food? No, no!
Gans: No. To take a term from your colleague Clay Christensen, what is its “job to be done”? And its job to be done is to provide you a place to wait, maybe eat, maybe get coffee and do some work sometimes, because you’re at the airport too early. Why are you at the airport too early? You’re at the airport too early because we know that sometimes it’s hard to get through security, sometimes traffic is bad, and you can’t count on your flight being delayed. And so you have a buffer there. And so airport lounges are there to basically encourage people: “Yeah, get to the airport early. It won’t be so bad. It’s not going to be so costly.” But if Google or something else came up with a perfect predictor of when you should leave home to get on the flight without having to wait in the airport, you wouldn’t stop at all. What’s the airport lounge there for? So airport lounges are there as a stopgap because we have to deal with uncertainty. They provide insurance. We end up with rules of thumb: You must leave two hours before. When you have a solid prediction, you leave when the prediction says so, and that could be two hours one day, one hour the next day. Who knows? You don’t have to care about it.
Kerr: It will take care of that. And to kind of pull apart this application of prediction—understanding the uncertainty to be addressed and so forth—you imagine an army of reward function engineers coming to exist, a new occupation. Tell us, what would be the job of the reward function engineer?
Gans: I don’t want to say it’s a job that people would necessarily have as a new classification, but we have a situation where, as we open up new decisions—where we’ve previously been following rules and are now allowed to make a more contingent or nuanced decision, moving from “satisficing,” as Herbert Simon called it, toward “optimizing”—we have to determine what you’re optimizing. One of the things we do when we set a rule is we don’t have to think about it anymore, and we don’t have to think about why we did it. We just do it that way. But when you want to move to something else, you have to think about why. When you’ve got a weather prediction, you’d better think about, “Why do I need that weather prediction? Oh, I know. I don’t have to carry an umbrella around with me all the time. I can carry it on days that it’s likely to rain and not on other days.” Those are the sorts of decisions you can make. So a reward function engineer is going to be someone who, in business environments, can unpack that decision and work out what the costs of different errors and mistakes are, so you can put a value on them, and a value on what prediction is going to get you as a return. Even with an umbrella, in order to decide whether it’s worthwhile—whether to carry an umbrella today or not—you have to work out, “What’s the cost if I get it wrong? How wet will I get, and how upset will I be? And what’s the cost if I get it wrong the other way, and I’m carrying an umbrella for no good reason?”
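[A minimal sketch of the umbrella calculation Gans walks through, comparing the expected cost of the two possible errors; the cost numbers are made up for illustration.]

```python
# Reward-function reasoning: weigh the cost of each way of getting it wrong.
def should_carry_umbrella(p_rain: float,
                          cost_wet: float = 10.0,    # misery of getting soaked
                          cost_carry: float = 2.0):  # hassle of carrying it for nothing
    # Error 1: you leave the umbrella home and it rains.
    expected_cost_leave = p_rain * cost_wet
    # Error 2: you carry the umbrella and it doesn't rain.
    expected_cost_carry = (1 - p_rain) * cost_carry
    return expected_cost_carry < expected_cost_leave

print(should_carry_umbrella(p_rain=0.4))  # True: 0.6 * 2 = 1.2 < 0.4 * 10 = 4.0
```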
Kerr: Yeah. The umbrella is obviously a very low-stakes environment. But one can imagine with many applications of artificial intelligence, it becomes quite sensitive.
Gans: Yes.
Kerr: Do you anticipate there being state guidelines—you know, credentialing or other aspects around this task of how you optimize and train the algorithms to do the right prediction?
Gans: In terms of training the algorithms to predict correctly, state credentialing I would be surprised about, although one can imagine that, in certain environments, deploying an algorithm in the field might have safety consequences. So you could imagine some approval process there. I think it is the case that developing these algorithms is not just shoving a bunch of data into some cloud service somewhere and popping out a prediction without really knowing how it got there. I think we are still in the mode where there is a lot of art to it. And it’s an art that can be taught through good data science and good statistical intuition and understanding where the data’s coming from, et cetera. So it’s not a plug-and-play thing yet. It’s not just a simple routine thing. And you know, there are a lot of decisions in the economy. There are a lot of decisions that have yet to have an algorithm and better predictions associated with them. I can imagine many decades before we run out of things to do.
Kerr: Okay. In the decades ahead, if reward function engineers are possibly on the rise, others look at radiologists or truck drivers and suggest those occupations might dwindle or disappear rapidly. Where do you stand on those types of roles?
Gans: Yeah, about three years ago, Geoff Hinton, one of the pioneers of AI, said—right in this very building that we’re in now—“You should stop training radiologists right now.” And by that he meant that in a few short years—and I think we’ve gotten there—many of the tasks of radiologists—that is, looking at an image and telling us what’s in it—could be done better by a machine. Now, I don’t know, in the case of radiologists, what other things they would end up doing; that’s for their own discussions. Radiology has been hit by this sort of thing for the past 50 years, so they’re not going to be surprised. They may end up doing other tasks. But take a case like driving. Truck driving is one of the biggest occupational classifications in the whole United States. If we get self-driving trucks—and that’s still an if, even here in 2019, when we were supposed to have them already—would that wipe out the truck driver? Well, when you start to think about it, you start to ask, “Is the truck driver’s job driving?” Sure, it looks like it. But they do a lot of other things. They are there if something goes wrong. They’re with the load. They’re security for the load. They’re able to handle things at either end of the trip. Now, I can imagine that the truck driver’s job could become a lot safer. I could imagine that the scale of the trucks—maybe, rather than one truck, they’ve got five sort of networked together—could go up. But would the job of somebody traveling with the truck, supervising it, and being with it go away? That’s harder to see. And I think what’s really interesting about artificial intelligence, like its predecessor, information technology, is that it really causes us to start to think about, what is really fundamental about a job? What is really necessary? And it turns out there’s more than you think.
Kerr: Yeah. That brings us to a framing going back to the computer revolution, stretching all the way back to the mechanical revolution: When something becomes very cheap, that affects the things that are around it. You want to be close to something that gets cheap. So if you look ahead to a world where prediction is going to become better and cheap, what would be your advice about how to train workers—prepare workers—for that environment?
Gans: So I think, you know, this will be different in every industry. And having had similar questions asked of me over the past year, here’s my current thinking: Even when it comes to artificial intelligence and the sorts of things that can be predicted, they’re still extremely narrow. Imagine that you have a job, and a job is a bunch of tasks, and each task involves a number of decisions that you might make. It’s getting easier to make an AI that can help you make a decision. In some situations, when there aren’t too many decisions, it can help you do a task. But having an AI that can do the prediction you need for an entire job is very, very difficult. It’s just not something that can be done easily. And the reason is that it’s hard to teach an AI to coordinate among different tasks and handle the relationships between them. So we’ve had these AIs, for instance, that people tried to develop five years ago to be your personal assistant. “Oh, I know. We can do it all through email; we can do all your scheduling and things like that.” That seemed like the sort of straightforward thing an AI might be able to do. But I think what proved hard is that ... there’s the one task of sending out a message for a meeting, but it’s sending it out to a human. Now, if that human were reliable and you understood what they were going to say and all the nuances, that would be no problem. But they’re not. They’re human. And so the juggling that has to occur is too much for the AI. It’s always different in different situations, some sort of nuance somewhere. And you can never get the reliability up to the point where you’d let the AI handle all your scheduling for you. And that’s even if you had a clear enough schedule that you could communicate to the AI how much you want a meeting with someone.
Kerr: Yes.
Gans: And things like that, which we know end up taking place. So that’s the issue at the moment. And so I’m optimistic on the front that most jobs will be improved as a result of AI.
Kerr: So augmentation rather than automation.
Gans: Exactly. Augmentation is an experiment. You can run it, see if it improves things, and climb a little way up the hill. Automation—that’s a whole other thing. You have to get it 100 percent right. And it’s only going to save you the money of that worker if you take the worker off your payroll. That’s a big ask. I mean, not even Tesla, which started from scratch with a machine-automated factory rather than building it in a traditional way, has been able to do that. They went too far.
Kerr: Yeah. Pulled back from some of the operations.
Gans: That’s a very automated industry.
Kerr: Yeah. So thinking about the executive—of course, they’re looking around saying, “What does this mean for us?” A lot of board members are also asking, “What should I be thinking about with respect to artificial intelligence? How should I be asking, ‘Is my company at the right spot?’” How does governance think about these tasks?
Gans: So I would think that the current message right now is: Don’t spend too much money. If you’re on the board and say, “I’ve heard about this AI. Go buy me some AI,” that’s going to be a bad, bad idea. Not only will that have people in your organization running around doing things that aren’t necessary, it’s going to cost a lot of money, and it leaves you vulnerable to enterprise vendors that might sell you something very expensive when you don’t need it. Instead, what we would recommend is that you go down to individual teams—maybe in some parts of the organization more than others—and say, “Okay, here is what AI can potentially do. It can help you predict and let you make better decisions. Where, currently, do you think it would be nice to have uncertainty eliminated? And then let’s talk about whether somebody could do that if we hire a …”
Kerr: You start with the use case.
Gans: Exactly. We start with individual use cases. Now you, as a CEO, might want to keep your eye on the prize and say, “There is some fundamental uncertainty impacting my organization, and maybe one day AI can handle that in order to change our business model.” The example we give in the book is Amazon. Amazon currently has a business model that involves you shopping: You order something, and they ship it to you. What if Amazon got so good at predicting what you wanted that, instead of waiting for you to shop, it just shipped it to you? Now, it currently can’t do that, because most of what it would ship to you would be a mistake, and you would have to send it back, and it would be very expensive—and annoying, and other things like that. But if it did so well that, all of a sudden, you come home and there’s a box from Amazon sitting there, and there are about 10 items in it and you need eight …
Kerr: Yeah.
Gans: … that’s pretty good. And you haven’t had to wait for it. It has come when you need it. I mean, right there! “I was thinking, I was just out of paprika, and I wanted to use that tonight.” You know, that would be wonderful. Now, we can imagine that. And, actually, it’s funny—Amazon seems to be making little moves that look like that idea. But the point of that exercise is, there’s a fundamental uncertainty that Amazon is really dealing with, or cannot deal with, right now. And you ask yourself, “Can we imagine an AI that deals with that?” And that’s where we look for the startups that might disrupt your business and things like that.
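[A minimal sketch of the economics behind that thought experiment: ship preemptively only when the predicted hit rate makes kept items outweigh return costs. The margin and return-cost figures are hypothetical.]

```python
# Ship-then-shop pays off only above a prediction-quality threshold.
def worth_shipping_preemptively(p_item_wanted: float,
                                margin_per_kept_item: float = 5.0,
                                cost_per_return: float = 8.0):
    expected_gain = p_item_wanted * margin_per_kept_item
    expected_loss = (1 - p_item_wanted) * cost_per_return
    return expected_gain > expected_loss

# 8 of 10 items kept (p = 0.8): 0.8*5 = 4.0 > 0.2*8 = 1.6, so ship.
print(worth_shipping_preemptively(0.8))  # True
# Weaker prediction (p = 0.3): 1.5 < 5.6, so don't ship.
print(worth_shipping_preemptively(0.3))  # False
```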
Kerr: Okay. So looking out 50 years from now, going back to that big, long horizon: We have some people who imagine a blissful utopia out there, and some who see jobless futures and a dystopia. Where do you guys come down on that: optimistic, pessimistic?
Gans: Well, we all have different views. We all have different opinions. We almost thought of writing the last chapter of the book as a conversation.
Kerr: Choose your own adventure?
Gans: Yeah, choose your own adventure! Basically: which one are you? So I can tell you mine, after doing all this. Right at the moment, I am a bit on the AI-pessimistic side. In order for AI to destroy jobs, it has to be really good, and I don’t think it’s good enough to do that yet. But I’m also on the side that in 10, 20 years, we could start to have that sort of effect. Still, I hold to the idea—and we’ve had this now through centuries of history—that there’s always something else to do.
Kerr: Yeah.
Gans: If something else gets cheap, there’s something else to do.
Kerr: New work always can be developed.
Gans: Exactly. I know that’s certainly the case for someone in my job, obviously, but it seems to be the case for people at above-average salaries in the workforce. The question is: What is it for the entire population? That’s a bit harder to tell. But we have that issue right now, anyway. We’ve had this issue of skills being devalued, of people not being able to find jobs with a skill premium, of switches in the middle of life. So nothing’s new with that. I don’t think AI’s going to be any worse on that front. I don’t have a crystal ball to tell me that it’s going to be better, either. But I don’t think there’s anything that AI has plopped on our doorstep that says, “Now we have to get real about this issue.”
Kerr: Yeah, it’s comparable to your board member saying, “Don’t spend so much right away. Hear a little bit more. Let’s take this in steps.” This is not so different from the classic, “Is this time different?” You’re kind of saying: Maybe it’s different in form, that it’s now about prediction, but it’s not different in the pace that you’re anticipating.
Gans: Yeah. I mean, look. If you take a situation where we develop some great AI that can handle call centers—which I guess is highly likely to come—well, that’s going to decimate all those call-center jobs. And there are some localities that rely on them a lot. That’s going to be dislocation. Now, it’s not an unprecedented dislocation compared with what we’ve seen in the past, but it’s something that would be on the horizon. If you are in a call center, this is an issue, and if you’re in a government that relies on one, maybe you can think about what sorts of proactive actions you can take. But once we get beyond that, it gets really hard. It gets really hard to tell. So you can imagine that you could end up with wasted government policies, other things like that. If the argument is to slow down AI, that doesn’t sound like the right way to go. As usual, when we get dislocations, we like to have people move around. And, of course, that’s your territory.
Kerr: Yeah, yeah. As you think about your work at Creative Destruction Lab (CDL)—this book was an outgrowth of the three authors’ work there—tell us about what CDL is doing, and what you guys have on the horizon.
Gans: The Creative Destruction Lab is a seed-stage development program that is now at six sites across Canada and is extending beyond North America next year. Five years ago, we started to notice that a lot of the startups coming through our program—which are primarily university-science based—were built on AI. That’s how we learned about it, and that’s how we were a little bit ahead of the curve in seeing this stuff happen.
Kerr: Toronto’s very much at the forefront of artificial intelligence research and development.
Gans: Exactly. We’ve still got several streams of artificial-intelligence‒focused startups. But we’re now branching out, you know, trying to be ahead of the game. We have a program stream in quantum machine learning—taking machine learning techniques and running them on various sorts of quantum computers—which we ran last year and which, much to my surprise, has already generated a number of viable ventures.
Kerr: Wow!
Gans: That’s quite extraordinary, given where quantum computing is. We also have another stream where we link AI and the blockchain; this is the stream that I’m in charge of. One of the promises of blockchain technology is that it could provide data with high integrity. The jury’s definitely still out on that one—it’s out on the blockchain, and it’s out on combining it with AI. But we’ll see what happens.
Kerr: Joshua, beyond your book, which I highly recommend, is there something you would suggest to people who feel caught a little off guard, a little flat-footed, about artificial intelligence and its future?
Gans: Yes. There’s a book by the British mathematician Hannah Fry called Hello World [Hello World: Being Human in the Age of Algorithms]. It’s just a beautifully written exposition of what this technology can do and what its risks are and things like that—the sort of explanation that we did not provide in our book in the same way. I think that is definitely worth looking at. There are also some very interesting online resources out there that try to explain these things, so people can reach their own conclusions. I should also plug that I have a second book coming out from MIT Press, Innovation + Equality, which I’ve co-authored with Andrew Leigh [with a foreword by Lawrence H. Summers].
Kerr: Give us a minute about that.
Gans: Whereas Prediction Machines is microeconomics, Innovation + Equality is a more macro-level view, trying to distill for policy makers and anybody else who’s interested: Is it really true that, if we want to encourage more innovation, we have to put up with a less egalitarian society, or greater wealth inequality, and things like that? Basically, our argument, both from the perspective of economic theory and from the evidence we’ve seen thus far, is that there’s no big tradeoff between those things. In fact, you can have more innovation and more equality at the same time, and they’re not going to cancel each other out. In some cases, they’re going to help each other. We think that’s an important perspective to have, especially with things like the future of work and artificial intelligence, because it tells us what we should be fearing. What we should be fearing, of course, as usual, is bad and misdirected government policy. We should be fearing traditional things like market power. But should we be fearing innovation and its negative consequences? No. We can deal with that.
Kerr: Joshua, thanks for telling us about Prediction Machines and the future of artificial intelligence.
Gans: Thank you.
Kerr: And thanks to all of you for listening in.