
Podcast

Harvard Business School Professors Bill Kerr and Joe Fuller talk to leaders grappling with the forces reshaping the nature of work.
  • 18 Jan 2023
  • Managing the Future of Work

AI: The good, the bad, and the transformative

Is it too late to secure the guardrails? More and more businesses are turning to AI for its efficiencies and revolutionary potential, but its proliferation has sparked widespread skepticism and questions about equity, privacy, liability, transparency, and security. AI expert and entrepreneur Manoj Saxena parses the business, policy, ethics, and workforce implications.

Bill Kerr: In some ways, artificial intelligence was an easier sell when it was less advanced. Organizations now face the complexity and hard choices that come with day-to-day use in increasingly sophisticated and sensitive applications. While it continues to push past theoretical limits, AI in practice has often proven more headache than transformative technology.

Welcome to the Managing the Future of Work podcast from Harvard Business School. I'm your host, Bill Kerr. My guest today is Manoj Saxena, Founder and Chairman of The Responsible AI Institute and generative AI startup Trustwise. We'll talk today about how AI is transforming work and society, what constitutes responsible AI, and what's at stake. We'll also talk about why, when it comes to AI, trust is as important as security. Welcome to the podcast, Manoj.

Manoj Saxena: Thanks, Bill. Thanks for the opportunity.

Kerr: Manoj, let’s begin a little bit with your career background in AI, which includes the early work at Watson and also the start-ups. Tell us about how you came to where you are today.

Saxena: Sure. Broadly, my career has been in three phases. In the initial part, after getting my MBA from Michigan State, I joined 3M. I was there for seven years as an executive and kind of cut my teeth on the business side. Then I had the opportunity and the benefit of creating and selling two software companies, which were venture funded. I ran those as a founder and CEO. My second company was bought by IBM, which is how I ended up at IBM. And midway through my time at IBM, the IBM board gave me the privilege of commercializing the Jeopardy system of Watson and said, "Let's go out and build a business around it." I did that for about four years, ran the program. We put $1 billion of investments into it. And then I went to the next stage of my career, after being an entrepreneur and a big-company executive, as a venture investor and educator, which is the phase I'm in now. My focus on the venture side is primarily on responsible applications of technology, particularly exponential technologies like AI that are a double-edged sword, and also on some of the emerging things like Web3 and generative AI. That's broadly what has brought me here. I was born and raised in India, in Hyderabad, so it's been a long way from there.

Kerr: Long journey. Beyond beating out contestants on Jeopardy, AI's got a lot of use cases. And some of them range from where you're augmenting human efforts to places where you're automating entire processes or structures. I want you to help us—from your investor side, from your technology side—where do you see the greatest promise? How's that world going to shake out over the next decade?

Saxena: Despite the name "artificial intelligence," there's nothing artificial about it. Just as the industrial age amplified our arms and legs, the AI age is going to amplify our minds and our brains. I believe 85 percent of the opportunity with AI will be in augmentation of human insights and human creativity. The rest of it is mostly around automation and autonomous systems. To me, the real value of AI is in being a partner and a collaborator for us as human beings. There are two primary areas in which AI will work as a partner and collaborator. One is around generating inferences and insights. If you're a knowledge worker, say a nurse managing diabetes treatment, or an underwriter writing insurance policies, or a call-center person taking a call, AI is going to help by tapping you on the shoulder and saying, "This person who's calling you is most likely calling about this claim, and it's on line 72. They will ask you, and here's the answer to give them." It's like having the power of 10,000 of the best experts standing behind you as you're doing your job. The second exciting part is around generating new content and new ideas, almost like a digital muse. This is where generative technologies, like GPT-3 and others, come in, where AI is going to start helping you. If you're a lawyer creating a new brief for a case, it might say, "Take this line of defense." Or if you're looking at renovating your apartment, and you feed it a picture, it'll come back and say, "If you want a modern architecture versus a neoclassical one, here's how the room might look." The other exciting part is using AI as a tool to expand your creativity. And that's the world I think we're just getting going with.

Kerr: I mean, I think what's interesting about those scenarios, ranging from the chat bot helping the customer-service representative all the way up to that point on the creativity side, is that there was still a part for a person. A human still there. Is that just the limits of the technology that you are seeing? Or that it can't handle all the edge cases? What stops the chat bot from just saying, "I can take it all the way from here. I don't need you anymore"?

Saxena: Well, in certain cases, technically you probably could. I mean, if you look at airplanes, most of the airplanes today, you have these two people with a hat on in the front acting as if they're flying the plane. Actually, it's machine intelligence and expert systems that are flying the plane. I think there is going to be a very important role for human beings all the way through, which is around things like interpersonal engagement, emotions, creativity. AI will take away a lot of jobs that involve boring work, sheer drudgery. I think that can easily be removed by AI. Where it'll start shining next is around driving process efficiency and process automation. And then the final part is the digital muse, where you're actually going to be using AI as a collaborator. And what I believe … people are worried about AI taking away jobs. My belief is more that AI will take away tasks. And if you don't know how to use AI in your work as an employee of the future, then you'll be replaced by someone else who does know how to use that technology as part of their workflow.

Kerr: And typically, what we find is that the skill sets that are very complementary to something that's becoming quite cheap become super valuable. And as AI becomes very good at this predictive nature, the insights you described, being able to act upon them, or, with the airplane example, the pilot showing my kids the front of the cockpit, those could still be very valuable.

Saxena: Think of Sully, Captain Sully, whose skill was needed to land it in the harbor. AI wouldn't have been able to do it. There is absolutely a role, a synergistic role, for humans and AI.

Kerr: We see companies pursuing AI in ways that seem tactically quite different. Some are very decentralized: business group leaders are given the mandate to find use cases important for them. Others take a very technology-centric, center-of-excellence type of approach. Do you have a prescription, as you think about enterprise AI, for how this should be enacted?

Saxena: I do about three or four dozen talks with boards and CEOs every year, and one of my first statements to them is that AI is too strategic to be left to technologists. This is a transformative business capability, and it needs to be owned and driven by the C-suite. This is not a linear technology; these systems are learning and growing every day. If you get it wrong, it could create massive unintended consequences, massive financial damage, brand damage. At an operational level, I talk about focusing both on automation and augmentation. There are business processes, I call it "boring makes billions," a lot of boring business processes that you can automate with AI. And then there are also knowledge-intensive processes that you can augment with AI. That's the second area. And start with a use case, doing it in 90-day increments. Financially, I ask them to think about AI from a business outcome first, working back, not data and models first, working out. One of the disservices the technology industry has done is that we have taught the market to look at AI through the wrong end of the telescope. We look at AI as a data and algorithms problem, whereas we should be looking at AI as a human-impact, business-outcome, and societal-impact problem and then work back from there. As you look at Managing the Future of Work with your series here, that's one of the fundamental ways I think leaders need to start reimagining and rethinking AI.

Kerr: Those are steps they should all take, but which ones would you bet on? What is it about the CEO or the senior leadership team that, when you see it, makes you circle that company and say, "That's a company that I think is going to make it to the next big stage of this AI revolution"?

Saxena: Number one for me will be how switched on the C-suite executives are about looking at AI as a transformative technology for the business: How will it help my customers? How will it help my business partners? How does it make me more efficient? The second part is, you need to have not just the IT person and the AI developer and the product expert; you also need to have a lawyer and a marketing person and a PR person, as well as people who are looking at these systems from an audit and explainability perspective. And the third and final part I look at is how clear they are about doing this in an incremental manner. In 90 days, you should be able to get quick results out of it and let success pay for itself as it scales, versus doing it as a two- or three-year project.

Kerr: So good things for us to look out for. Let's turn now, though, toward the ethics part of the conversation, the work that you're doing with the Responsible AI Institute and so forth. I want to begin by having you ground us in some specific cases, examples of biases that you found consequential or impactful, that you want to work on correcting going forward.

Saxena: The answer, very soon, is going to be everywhere. I mean, we are surrounded by AI. People worry about job losses with AI, but the real thing to worry about with AI is these invisible algorithms that are determining what news you see in your feed, what jobs you're applying for, whether you're getting accepted, what loan you're getting for your home, what college your kid goes to, who you date, what music you listen to. Some of the big use cases that worry me are around things like lending. Results have shown that LGBTQ couples get denied mortgages at a much higher rate than other couples. Things like healthcare. There was a large lawsuit filed recently alleging that minorities got less healthcare for similar health conditions than non-minorities did. Devices: facial recognition systems and devices have proven to be biased toward light skin, because the training data has this default of whiteness on which systems are being trained. So if you're using it for heart rate detection or some medical conditions, you have massive amounts of issues there. Human resources, we talked about it in the introduction: whether you're applying for a job, or for a promotion, or you're deciding who to let go, HR systems have already got AI models. So all of these are things that worry me around this notion that, as I call it, we are "automating inequality" right now. We are automating inequality at scale, and there is no governance, and there is no trust in these systems.

Kerr: Is this something that is very visible in the data? Is it something that operates at the 1–2 percent level? How does this bias kind of manifest itself? And do you anticipate it getting just naturally better or worse without some of the things we’re going to talk about in a minute—certification and others?

Saxena: Gartner predicts that as many as 85 percent of AI projects by the end of this decade, in another eight years, will provide false results. Okay, 85 percent. So imagine if you're running your business on these processes, and they have problems around bias, problems around lack of explainability, problems around compliance, problems around making sure that these models are not getting hacked. This is like cybersecurity. I call this space "cyber trust." Cyber risk to me is cybersecurity plus cyber trust. We are just in the early phases of that. The magnitude of this is massive, because the consequences are severe: the EU has passed a regulation that says up to 6 percent of a company's revenues could be penalized for every instance that happens. Sixty-one percent of people have said they have experienced issues where they lost revenues because of this. Sixty percent have said they have experienced issues where they lost customers because of this. At the Responsible AI Institute, we actually have a thing called the "Rogue AI Heat Map," where you can see incidents of AI going wrong around the world.

Kerr: Wow. So let’s go back. The 6 percent figure was one that’s eye-catching. Is that 6 percent of the revenue related to the AI product, or is that 6 percent of the company’s overall revenue?

Saxena: Overall company’s revenues.

Kerr: Wow.

Saxena: It’s a big number.

Kerr: It’s a very big number.

Saxena: And this is the next one after [General Data Protection Regulation] GDPR. Now, on the U.S. side, there's the U.S. algorithmic accountability law. So this is going to become a very significant part of risk management for companies, as well as an opportunity. Because what's happening right now, in the absence of this, is that you're building up residual risk. And you asked me how this happens. It happens across a few things. One is that the data you're sourcing has biases built into it. Second is that the teams you are using to build it have latent biases they may not be aware of. That's why cross-functional, cross-gender, cross-ethnic teams are important. And third is that, as you are designing and deploying these systems, you're not measuring them. So it's a combination of things that could give you biased as well as hackable AI at the end of it.

Kerr: Do you think that it's the fear of the fines? Or is it public embarrassment? Or maybe it's even employees and customers being rightly upset about egregious cases? What's going to be the source of change, or what makes a company come and say, "We've got to do a lot better at this"?

Saxena: That's a great question. So it's interesting. Companies that tend to be the leaders in their space tend to take a more positive and proactive view. They say, "Hey, by building systems that are, number one, more transparent, I can build more trust with my customers, so my brand value can increase." But unfortunately, there are many more who look at it as risk mitigation. So it's, "If I don't comply with the rules and regulations, I'm going to be penalized. If I end up having my lending system discriminating against a certain group, I'm going to have lawsuits. If I end up providing services, like my chat bot going racist (a good example is the Microsoft chat bot that said Hitler is a great guy and the Holocaust never happened), I'm going to damage my brand." It's a combination of things around compliance, financial losses, brand losses, as well as employees. The more ethical and the more transparent I am, the more employees are going to stay, and the better the talent I can attract to my company.

Kerr: All large organizations already have a risk-and-compliance function inside them. How is that interface between technology and risk compliance currently operating?

Saxena: Very poorly. One of the big problems that I see, and it's going to get fixed over time, is that companies have what I call a "three lines of defense" model. Line of defense one is where the IT and business and product people sit. Line of defense two is the risk and compliance people, so think of it as the second tower. And line of defense three is internal audit and external audit. Today, 95 percent of the work is being done in tower one, line of defense one. And when they're finished with an AI, they throw it over to the risk and compliance people. There was a report that said 8 out of 10 AI projects were being stopped from going into production by risk and compliance, because the builders can't explain how the system complies with the bias requirements of a [Health Insurance Portability and Accountability Act] HIPAA rule or an [International Financial Reporting Standards] IFRS standard, because these two groups can't talk to each other. When risk and compliance says, "Show me how this complies with this," the builders say, "What do you mean? I have a Gini index of 0.6." And the risk and compliance person says, "What does a Gini index mean in the context of this law?" Not only that, the risk and compliance people then have to work with auditors, internal audit and external audit, and that's where the whole system is broken. These three towers don't talk, and they don't connect with each other.

Kerr: So there's a lack of return on the investment that's been made in those use cases. I'm sure it's demoralizing to the technologists to have developed this, and it's ready to go, and then it gets held up.

Saxena: Massive technical debt. We call it “technical debt,” where they have to go back and redo this work to make it more transparent and explainable and bias-free and compliant.

Kerr: I want to go back to your phrase of “cyber trust.” What do you mean when you’re saying “cyber trust,” and how do you anticipate the world looking in 2032?

Saxena: Without trust, we don’t have a digital economy. There will be chaos online, there will be political and social upheavals, because misinformation is going to spread very fast. People are not going to trust the brands. People are not going to trust the products and services they’re buying. And particularly, when people start understanding that companies are using AI in their chat bots, in their email systems, in their business processes to accept or deny a loan or come after you for credit default, AI will not be adopted, because the risk and compliance people and the regulators are going to tell you that you cannot deploy it unless you make this thing explainable and transparent and compliant. Responsible AI design and governance is something that companies are waking up to and saying, “It’s not something I can bolt on after I’m done with the design of an AI system.” We’ve seen this in cybersecurity. So cyber trust is going to take the same path around how I source my data, how I build my models, how I use the data and models to drive insights and decisions. All of this needs governance and transferability and trust that doesn’t exist today. And that’s why I believe that this is a very important and a rich area that’s only going to become more important at a board level and at a company level.

Kerr: Let's turn to the Responsible AI Institute, which you founded. Tell us a little bit about its purpose. And also who's leading it, how it's organized and funded, those kinds of basics about the organization.

Saxena: Yeah, thanks. So, in my second year of running Watson, I was speaking about Watson at a big conference in Washington, D.C., and there was a gentleman in the second row who put his hand up and said, "Oh, I see what you're doing. You are building an AI that will determine [for] my wife, who's got stage 3 cancer, whether she's going to get this healthcare package or not. So, in essence, you're building an Obama death panel machine that will decide whether my wife will live or die." That question really hit me. And he said, "Can you explain how the machine did what it did? Can you explain how it's not going to discriminate against my wife versus someone else?" That whole thing got me on this path, that there was a somewhat naive belief in the technology community that technology equals good. And it set me on the journey to start building what I call an "AI assurance" and "AI certification" type of model. It had to be, by default, nonprofit, so that we are not aligned to any particular vendor. So I founded the Responsible AI Institute as a 501(c)(3) nonprofit six years ago, when I saw that what was coming around the corner was pretty ugly and pretty damaging to society and to business. And I founded it on three tenets, my vision. One was that the institute will focus on human impact first and work backwards toward the tech. Number two, that it will be a do tank, not a think tank, driven by community. And number three, that it will itself be accredited by leading organizations like ISO and IEEE and others. We can then go forward and start doing conformity assessments to say, "We can confirm that the way you're building and designing your AI is going to give you positive results." And we can even give certifications. So, working with partners like Deloitte or PwC, we can even help them get a third-party certificate. I consider this to be the most important work I have done in my life. And it's work that's not going to get done in my lifetime. It's work that's going to continue for decades.

Kerr: Let's dig more into the certification. Tell us a bit about that program. Is it always one of the large auditors that does the third-party certification? And in your levels, do you have any gold, platinum, double-diamond executive, just to borrow from the airlines? How is business currently looking compared to those levels of certification?

Saxena: Overall, when someone engages with us, they can engage in one of three ways. You could be a corporation, or you could be a builder of software technology, like an HR system. Either could come to us and say, "Hey, I want to assess and certify my system." And when we certify, it can be done in three different ways. First-party certification is where they self-certify. So if you're a large enough tech company, you can say, "Hey, I've taken the conformity assessment from the Responsible AI Institute, and I have self-certified that I meet the requirements." Second-party certification is where any of our partners, such as system integrators or non-audit firms outside the Big Four, can come in and certify you. That's for a medium level of trust. And the highest level of trust is where an audit company, a recognized auditor, comes in and certifies you. Not every system that a company has needs a high level of trust. There might be simpler AIs or a dataset that they may want to self-certify, and there may be others for which they may want a third-party certification.

Kerr: And to go back to the question of the various levels, how are corporations tending to find themselves when they go through these assessments? Are they happy with their results? Is it a little bit scary to them?

Saxena: Yeah. So one of the other important tools that we have published is a Responsible AI Maturity Map. Think of it like a [Capability Maturity Model] CMM software development map, and it has five levels to it. Level zero is unaware. Two years ago, there were a lot of companies that were unaware, but now I can hardly find any that are, because boards, and investors, and customers are asking for it. Level one is someone who is aware and is playing with it. Level two is someone who's got a tactical project going on. Level three is someone who has actually got a strategic, company-wide initiative on it. Level four, which is the highest one, is someone who is using it across their network with their suppliers and their partners. Most companies today are in level one or level two, where they are just getting going, they're putting their policies in place, and they're working with companies like the Responsible AI Institute to say, "Can you help me lay down the foundation for what kind of policies and controls I should have?" Or, if they're working on a few use cases, "Can you come in and do an assessment of how well I conform to my rules and regulations?" And broadly, they're doing it for four reasons. You asked about the platinum company. Number one, they're doing it to prepare for and comply with AI laws and regulations. Number two, they're doing it to mitigate business and technical risks as AI gets into all their processes and workflows. If I'm using it to make decisions, or if I'm building it into a product that I'm selling, I need to make sure that I'm managing the risk. Number three, they're using it to avoid significant technical debt, so they don't have to go and rewrite the code and redo the product line. And last but not least, they're doing it to demonstrate leadership and to build customer trust. So when they come to us, these are the four reasons they would use it for. They would then do a diagnostic on the maturity curve. Then they will give us one use case and say, "Can you do a conformity assessment?" And then some of them would say, "Can you certify it all the way through?" Most of them today are in the conformity-assessment phase, and many of them are now wanting to work toward certification with auditors.

Kerr: The algorithms are so important to the future that many companies view them as their most protected intellectual property. Do they have to give the algorithms to you? Will they have to give them to regulators in the future? How does that sensitivity come into play here?

Saxena: First of all, way before we touch the algorithm, what we look at is the processes by which they're designing and building an AI. So we do sort of a process-maturity check. The second thing we do is look at the data flows, what kind of data they're getting, because data is the fuel that's feeding these algorithms. So, understanding data. And the third part we look at is what kind of models they're using, what class of models, and whether those tend to have these problems. So we don't ever ask them to give their core IP to us. Think of it like a TurboTax-like flow that makes sure we are covering all these areas. At stages along the way, we will ask them to upload proof of work that says, "Okay, now feed us your bias score," and things like that. The method has been built over the last five years so that we can go through all of this in a collaborative and non-invasive way. And the outcome at the end of it is a conformity report, as well as remediation steps covering what they need to remediate so that they can move toward certification.

Kerr: But let me push on that. At some point, it does seem that, for us to really understand the biases and structures, we have to get inside the black box, or we have to somehow interact with it. How's that going to happen if it's, again, such a guarded piece of the company?

Saxena: Great question. So there are a lot of AI governance tool vendors who are actually going in today and doing model measurements, model monitoring, model risk. We are not in that space, so we would work with any of them. Amazon has its tools, Microsoft has its tools. And there are start-ups like Fiddler and Credo and TruEra and CognitiveScale who are all building their own tools. We are agnostic to the tool. The analogy is, if you're looking at the engine block in your car to see how it's performing, we have a method and process to assess your car and your engine block, but we would use these vendors' tools to say, "Give me the oil pressure, give me the piston RPMs." We don't get into that space.

Kerr: It sort of sounds like you’re using AI to understand the challenges of AI or overcome some of the challenges of AI.

Saxena: A hundred percent. This is the only way to do it. Trying to govern AI humanly is like trying to outrun a car or outfly a plane. Humanly, it’s impossible. The only way to manage and govern AI at scale is to use other AI.

Kerr: This is a big question for many boards. And to refer back a little bit earlier in the podcast, we talked about the sizable risks that are sitting inside organizations. How do you anticipate corporate governance evolving and bringing this into the boardroom on a more regular basis?

Saxena: There are three different ways corporate governance is going to change. First, cyber trust and AI trust is going to move from just measuring model bias or model explainability to something that actually looks at rules and regulations and data and everything else. There'll be a much more exhaustive framework for looking at this stuff rather than simple metrics. The second thing is deeper: corporations are going to start going deep into their processes, as well as their algorithms and their data. That's why we built the Responsible AI Maturity Model, which is going to give them a sense of how they are performing at a company level. Because if they want to scale with AI, they need to industrialize this process. So they're going to go deeper into their software development processes, into how they're buying products, how they're buying data, how they're buying models. All of that is going to be refined. And third, they're going to go deeper into workforce skill sets: What kind of skill sets do I have? What kind of skill sets do I need? Business executives are going to need what I call a "real-time dashboard," a cockpit showing the total value at risk.

Kerr: So I'm going to ask maybe the impossible question here. What does all this then mean for future leaders as they look ahead to a world where, I think it's beyond doubt, AI will play a transformative role?

Saxena: Fundamentally, leaders need to look at AI through the lens of both economics and societal impact. Today, everything in a company is driven from a total shareholder value perspective. Tomorrow, I think it needs to be driven … the CEOs and the future leaders need to look at it not just as total shareholder value, but total societal impact. Think of this as the meta version of ESG, but a lot more real and a lot more prevalent. So the role of a future leader is to take a thoughtful approach to AI innovation and to build a workforce that is empowered by AI in how they work, as well as in what skill sets they have, and finally to have … I call it a "societal and a model vector" behind it. It's not just like a web technology, where you have web apps. They have to be thoughtful about, "How am I responsibly deploying this technology, into which processes, and what are the unintended consequences it may cause? And should I really be deploying it over there or not?" The second part, which I'm very excited about, is that new technologies like these foundational models and generative AI are going to unleash massive amounts of creativity, when you start having AI as a partner and it starts generating new concepts and new ideas for you. So future leaders have to look at how they can transform the workforce so that it can use AI as a partner, beyond just automating processes. And the other part is, how do you do better hiring? How do you put AI to work in your own HR processes, so that you can hire people better and do more workforce learning programs? One of my pet peeves is diversity. There was a study done by the Brookings Institution a few years ago that showed that women have better digital skills and aptitude than men, in general. Yet, if you look at the percentage today, women make up less than 30 percent of the workforce. So if you are a future CEO, you have got to be asking, "How do I bring in gender diversity?" Not just because it's good to have gender diversity, but because it's good for business, because women have been shown to have better skills and aptitude.

Kerr: What do you see as the role of the education system, universities, and also certifications that may be provided outside of the traditional education structure?

Saxena: The core of it, I think, will happen at a few levels. One is graduate-level education, where we have to get the next generation of leaders to look at the problem from a human lens, as well as cross-disciplinary. So it's not just a business school problem. It means getting the law school, the computer science school, the ethnography and human design people. Number two, we also have to start driving workforce education through executive education. There are lots of leaders today who are already making decisions on this, so exec ed programs are going to become quite important. And third, in addition to these broader exec ed classes, there will be certificate courses on deep specialization. It could be around how you manage the data supply chain for responsible AI. It could be around how you work with regulators on putting new AI policies in place. It could be around how you respond when an AI incident happens. What is crisis management when an AI goes rogue? What does that look like? A whole ecosystem beyond university education, of specialized technicians trained through apprenticeships.

Kerr: Let me just continue with a segue into the policy side. What do you see as the future on the policy side? Or what are some things you might propose, policy-wise?

Saxena: This is the other important work I've been doing, in addition to working with corporations. Our Executive Director at the Responsible AI Institute, Ashley Casovan, comes from the government of Canada. So we are working with corporations, we are working with governments, and we are also working with groups like the U.S. Department of Defense and military establishments to see how they are thinking about it. Broadly, the challenge is to come up with the right policies and regulations that do not stifle market innovation. But I think policies and controls are needed at several levels. One, it needs to happen at the company level. Your company needs to have its brand values and corporate values implemented in its digital systems. Just like a human worker is told what the company values are, how do you teach a chat bot worker what the company values are? So one is at the company level. Second is at the state and national level, right? Texas may have different things versus Illinois versus California. Third, consumer-level awareness and activism, I think, is going to be another set of policy drivers. And last but not least, the purpose of this podcast, which is future leaders: they themselves have to start thinking about what policies and strategies they need to put in place.

Kerr: Maybe as a final wrap-up to our conversation, I want to go to 2032, a decade out, and ask you: How confident are you that we will have an improved world due to artificial intelligence? And what might that world look like for us?

Saxena: Over the long term, I have no doubt that we can build a better society with AI. It's going to be ugly getting there. There will be a lot of, unfortunately, disasters and penalties and fines before we get there. But over a 20- or 30-year period, I think we can build a world, we will build a world, where decisions are driven by human and machine collaboration, not just humans. We can build a world where opportunity is a lot more fairly distributed across groups, because the machines have been trained and tested for bias, and where decisions are made that are more transparent and explainable than what we do today. I also think business executives will have real-time dashboards of, "What is the impact, and where are the risk hotspots of where I'm introducing AI?" On the other side, there will also be, unfortunately, a lot of bot farms and product farms, so a lot of fake accounts are going to show up, not just on Tinder, but also on Instagram and everything else, where AI is going to generate a person that doesn't exist, a picture and a video of that person. Those fake personas can then tag products and start tweeting on their behalf, all of it done by AI, and then collect money for products that may not exist. So we will also see a downside and a dark underbelly of this, where there will be a lot of fraud and other incidents. But my hope is that, above all of this, if we build the right class of leaders who are schooled and graduate with a human-centric mindset around AI, we can absolutely build an amazing society, because technology by itself is neither good nor bad. It's how we put it to work that makes the difference.

Kerr: Manoj, thank you so much for joining us today.

Saxena: It’s my pleasure. Thank you.

Kerr: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.
