Managing the Future of Work

Podcast


Harvard Business School Professors Bill Kerr and Joe Fuller talk to leaders grappling with the forces reshaping the nature of work.
  • 02 Jul 2025
  • Managing the Future of Work

Designing Equitable Workplaces

The Harvard Kennedy School's Iris Bohnet and Siri Chilazi on the logic behind a systems-level approach to workplace fairness. How A-B testing and targeted interventions—incorporated in day-to-day workflow—can help organizations tap more of the talent pool.

Joe Fuller: There don’t seem to be any small-stakes business decisions anymore. Automation, return-to-office policies, hiring and promotion—all have sweeping implications. And in each case, leaders face a basic question: Will employees judge their actions to be fair? How they respond can affect everything from engagement and retention to innovation. When notions of fairness and equity are hotly debated, how can businesses cultivate a culture and systems that reward even-handedness?

Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Harvard Business School professor and nonresident senior fellow at the American Enterprise Institute, Joe Fuller. It’s my pleasure to welcome Iris Bohnet and Siri Chilazi to the podcast. Iris is a professor of Public Policy at the Harvard Kennedy School of Government, where she co-directs the Women and Public Policy Program. Siri, an HBS graduate, is a researcher in that program. Iris and Siri are co-authors of Make Work Fair: Data-Driven Design for Real Results. The book presents a strategy for advancing workplace equity by redesigning systems and structures. We’ll talk about what it means to approach fairness as a design problem and the tools employers can use to cultivate environments that engender both performance and alignment. We’ll also look at how behavioral sciences can inform practical solutions that rely on incentives and outcomes rather than moral values—and, ultimately, why it’s more effective to look to change systems than to change individuals. Iris and Siri, welcome to the Managing the Future of Work podcast.

Iris Bohnet: Thank you for having us.

Siri Chilazi: Thank you.

Fuller: Iris, maybe I could start with you. You’ve been a distinguished scholar and observer on topics of opportunity in the workplace. Maybe you can give us a little bit of your cursus honorum to reach this stage in your career and why you became interested in this. And then, also, how it is you happened to start collaborating with my former student, Siri?

Bohnet: Yeah, very happy to do so. I’m a behavioral economist, and as behavioral economists, we are interested in helping people, organizations, and society make better decisions. And that’s really what I’ve been working on for maybe a dozen years early on in my career, when much of my work focused, for example, on trust and fairness, and negotiation, bargaining, et cetera. And then I started to increasingly work with organizations, and I became intrigued by the question of how we make decisions about people. And so people management became close to my heart. And then, of course, the question of whether we really create equal opportunity, and whether we—when I say “we,” we the world, the organizations—really benefit from 100 percent of the talent pool seemed like a natural next step. And that’s what I’ve been doing for the last 15 years or so. And then Siri joined my research team here at the Women and Public Policy Program at the Harvard Kennedy School about, now, Siri, I don’t know, eight years ago, maybe?

Chilazi: Almost a decade ago. Yeah.

Bohnet: Almost a decade ago. Wow. In any case, we’ve done some interesting work together. And then Siri felt like we really needed to write a book together. She convinced me that we actually needed to do that. So it’s been a lot of fun working on Make Work Fair together with Siri.

Fuller: Well, I’m glad she did prevail on you, Iris, because I think it’s a very powerful and important book. I think one of the very interesting contentions in it is that fairness is actually a problem of design, and that creating workplace equality really relies more on, one could say, policies, procedures, execution, metrics, than simply a moral commitment to equality in the workforce. How did that idea emerge? And how have you gone about formulating it and testing it?

Chilazi: This is where that behavioral science perspective that Iris was just talking about comes in, because behavioral science has shown over and over again that, when it comes to the drivers of human behavior, often those policies or processes or even physical environments that surround us play a much bigger role than do our own desires or the knowledge and information that we have. But often in workplaces, historically, we focused on trying to fix individuals, rather than trying to fix the system. And so, to us, that’s what Make Work Fair really is about. It’s about ensuring that all people have an opportunity to enter the workplace, and then do their best work while there. Of course, not everybody’s going to become the CEO. People are not equally talented. They don’t have the same priorities, the same skills. But for the sake of our own organizations, the highest possible performance for our companies, we need to make sure that we’re getting the best out of all the people that we have working for us.

Fuller: You describe in the book this notion of a toolbox that can be adapted to different processes and problems facing a company as it tries to maximize its capacity to tap into the full range of talents available in its employee base. And having that available is a superior approach to hard targets or other metrics. Could you elaborate a little bit on the evolution of that concept?

Bohnet: Yeah. In many ways we feel that organizations, whether that’s private companies, or public sector organizations, NGOs, have to decide whether they want to get from A to B, from A to C, A to D. So we’re not really telling them where to go. The toolbox has lots of different tools that go beyond, for example, setting specific targets. But my focus is on how we hire, how we promote, how we do performance appraisals, even how we run our meetings. So it definitely also goes into cultural questions. Whatever the target, the goal that a company wants to meet, there are these different tools that they can use. And then, organizations in different jurisdictions might want to use different types of tools. In some jurisdictions, maybe certain tools are no longer compatible with the law, while in others, they are required by the law. I’m thinking of the E.U. now requiring extensive disclosure in terms of data representation and pay gaps, while that is less of a topic right now in the United States.

Fuller: Within a specific individual enterprise, should managers have the ability to use a different combination of tools based on the specifics of their responsibilities and workforce? Or is this something that’s better governed and established at the level of the employer across the board?

Chilazi: It’s a great question. The first thing that comes to mind is flexible work, and how Covid taught us, and research shows over and over again, that there’s often no one-size-fits-all solution. The larger the enterprise, the more people you have working in different types of roles that have different requirements and comprise different activities. And so this idea that we can come up with a one-size-fits-all policy for when and where people should work is just not realistic. And, in fact, research shows that, when we allow people to self-select into the types of working arrangements that work best for them, that’s actually when we get the best performance out of our employees. So I think that principle of some combination of guidance from the top, but also flexibility in local implementation, so to speak, is important. But there is something to be said for having an organization-wide set of principles and direction. And sometimes that could involve things like goals and targets, or sometimes it could involve just a commitment to making sure that we are evaluating all our processes around, for example, hiring, performance evaluations, and promotions, and making sure that they are actually fair for all employees. Because what we also don’t want to wind up with is a situation where some pockets of the organization have done a great job of creating best-in-class, objective, de-biased processes, while other parts of the organization remain a hotbed of bias.

Bohnet: Siri, I completely agree with you, but I might want to add an example of where company-wide policies might make a lot of sense. So Siri and I are very much committed to evidence. So mostly what we recommend is based on a randomized controlled trial, meaning an A-B test that was done in an actual organization, where we evaluated what works and what doesn’t. So think hiring, for example. We do know that unstructured interviews, where we just talk about your hobbies and wherever the discussion takes us, just are not good predictors of future performance. So that’s a company-wide insight: we just should not do unstructured interviews, independent of where you’re at. So I think the secret sauce, really, is to combine both the bottom-up approach, targeted to the specific needs of a department, and the top-down approaches that are relevant across the organization.

Fuller: When we think back into earlier versions of interventions like this, I think I would characterize most of those interventions as one size fits all. We’re going to have training for everybody, and everyone will do the training, and the training will be uniform, and that’s going to solve the problem. Or are we moving to a more sophisticated, data-informed point of view about how to do these things?

Chilazi: I think we hopefully are moving into a much more data-informed way of doing these things. There’s another thing that characterized some of those earlier approaches to fairness in organizations, which I think is actually an even bigger contributor to why they weren’t as successful as we would’ve hoped for them to be. And that’s the structuring of these interventions as programs, as something separate from the everyday flow of work. So, Joe, you mentioned trainings; that’s an excellent example. You’re going to your meetings, you’re meeting with clients, you’re designing products, you’re doing all the stuff that you need to do to move your business forward. And then, at the end of the day, there’s the ping that says, “Oh, and now you have to sit through a one-hour module of training.” So it’s extra. It’s on top of your daily responsibilities. And often, it’s delivered at random times. You might get a prompt to de-bias your hiring in June, but the next time you’re actually hiring someone is in January. Often, the content of these trainings is not related to the work that you are doing on an everyday basis. So part of Make Work Fair is shifting that paradigm and instead integrating efforts to create fairness into all the things that we’re already doing. So you’re going to sit in meetings and run meetings; we’ll run them better. You’re going to make performance evaluations, create ratings, do write-ups for your employees; do that in a more objective manner. Actually, Iris and I, with some colleagues, had the opportunity to partner with a large global telecommunications and engineering firm, with more than 100,000 employees in 100 countries, to test a next generation, if you will, of behavioral science-inspired diversity training, which was really just a seven-minute video that managers were invited to watch once they had raised a requisition for an open role, but before they’d gained access to review the submitted applications.
And our hope was that this video, which was very timely, the content of it was targeted to the hiring decision, that it would actually shift who did, indeed, get shortlisted and hired. And that’s what we found. Managers were more likely to shortlist women and significantly more likely to shortlist and hire non-national candidates—so people whose nationality was different from the country of the job that they were applying to. And I think that’s a great example of this kind of mainstreaming, right? It was a small tweak to the hiring process. We just asked managers to watch a quick seven-minute video, but otherwise, keep doing exactly what they were doing. And such a small intervention turned out to have a measurable and meaningful impact on who actually got hired.

Fuller: But, of course, it also makes the line of sight to better business results, better operating results, evident to someone who’s changed their practice but is not doing it divorced from the day-to-day content of their work. Let’s go back to the metaphor of the toolkit. What are the tools that you think should be taken out of the kit most regularly, or are most effective? What’s the claw hammer? What’s the Phillips head screwdriver? What’s the saw or file? You alluded to not having unstructured interviews. What are three or four of the other top practices that, if not universally applicable, always have to be considered?

Bohnet: We would always suggest starting with data, understanding what’s happening in your organization, and then addressing what actually needs fixing. Because that’s another thing that Siri and I found in our earlier work: many organizations are just throwing money at the problem without really identifying what’s broken. While an increasing number of organizations are moving in the direction of using data to diagnose, it’s much better to do a pilot, do some A-B testing, and see what works. So just to give an example, I worked with an investment bank a few years ago, and they approached me asking whether they should change to a particular tool that I knew about, which was an AI-based tool. And I said, “What you should be doing is, you should run all of your first-year candidates through the process that you normally use. And then, in parallel, also have them go through the AI process. And then at the end, you can compare. Then you see who you would have hired based on your traditional process and compare to the AI process. And maybe you like what you see, and maybe you don’t.” And that’s actually what they ended up doing. And then there are lots of tools which are very specific to the different decisions that people have to make. And that can be related to job advertisements. Maybe we can give you a tool on how to de-bias the language in your job ads so that you can benefit from 100 percent of the talent pool. It can be around screening, which could be an AI-based tool. And then there’s the last stage of the hiring process, where you might do a skills-based assessment or an interview or some other test. And in all of those stages, we have these very concrete tools. So, to stay with the analogy, these are not generic hammers that you use widely, but very, very specialized screws and little things that might work in that particular context.
And then you can go through the whole talent management, of course; then we can talk about performance appraisals, we can talk about promotion processes, and then some more of the cultural aspects. So this middle bucket is really about embedding these changes into your talent management across the board. And then the last one is about culture. And Siri and I spent quite a bit of time thinking about norms—the power of social norms in organizations, and how organizations, in fact, can better benefit from the norms that are present, but also, importantly, help shape those norms so that employees know where we are actually going collectively. So it’s really data, and it’s this embedding of the insights into your processes, and then, finally, it’s shaping the norms that we want to collectively uphold.
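As an illustrative aside: the parallel-pipeline pilot Bohnet describes, running every candidate through both the incumbent process and the AI tool and then comparing the resulting shortlists, can be sketched in a few lines of Python. All candidate data, field names, and scoring rules below are invented for illustration; the bank’s actual processes are not public.

```python
# Sketch of an A-B comparison of two hiring pipelines: score every
# candidate under both processes, then see where the shortlists agree
# and where they diverge. Everything here is hypothetical toy data.

def shortlist(candidates, score, k):
    """Return the ids of the top-k candidates under a scoring function."""
    ranked = sorted(candidates, key=score, reverse=True)
    return {c["id"] for c in ranked[:k]}

def compare_pipelines(candidates, traditional_score, ai_score, k):
    """Shortlist with both processes and report overlap and differences."""
    trad = shortlist(candidates, traditional_score, k)
    ai = shortlist(candidates, ai_score, k)
    return {
        "both": trad & ai,              # shortlisted under either process
        "traditional_only": trad - ai,  # only the incumbent process picks
        "ai_only": ai - trad,           # only the AI tool picks
    }

# Toy data: two made-up assessment signals per candidate.
candidates = [
    {"id": "A", "interview": 8, "work_sample": 5},
    {"id": "B", "interview": 4, "work_sample": 9},
    {"id": "C", "interview": 7, "work_sample": 7},
    {"id": "D", "interview": 3, "work_sample": 4},
]

result = compare_pipelines(
    candidates,
    traditional_score=lambda c: c["interview"],  # interview-driven process
    ai_score=lambda c: c["work_sample"],         # stand-in for the AI tool
    k=2,
)
# result["traditional_only"] and result["ai_only"] show exactly who each
# process would have hired that the other would have passed over.
```

Running both pipelines in parallel, as in the sketch, is what lets the organization compare outcomes before committing to either process.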

Fuller: It’s certainly been surprising to me in our research how little emphasis is placed on gathering data. Very few companies do exit interviews, by the way, or interview people that turn down offers. People that decline promotions, they just say, “Oh, well, they just declined, and we’ll go on to the next candidate,” and don’t get curious about it. And it’s also curious to me how Balkanized the data is in most institutions: HR sits on subsets of data, and even though they administer the performance management system, the real performance data lives in what I’ll just call “operations,” and very little investment is made to correlate things like who gets promoted rapidly, what their profiles are, and how that might inform what candidates we’re searching for, or things like that. Now, technology, in my estimation, has a lot of potential to break down those silos and really give an integrated sense of the state of the workforce, the state of the candidate pool, say the skills inventory, and present that to management in ways that were not possible even five years ago. What’s your point of view about how technology is going to affect these decision-making processes going forward?

Chilazi: We talk a lot about how to harness data as an engine for change. We’re asking organizations and individuals to approach fairness exactly the same way as they approach their “core business.” You’d never dream of launching a new product without having a deadline for launch, without having sales projections, market share targets, profitability targets. You’d do user testing along the way to make sure that the product is working as intended, right? So you’d be using data and technology in a very regimented way to ensure the success of your product launch. And then once that product is out in the market, you’d obsessively continue to collect data, to make it better, and to make sure that it meets all the targets. So this is exactly the same way that we should approach managing our people, developing our people, growing them, and attracting them in the first place. So technology has huge potential to help us, and allow us to do things at greater scale and faster than we have before. But at the end of the day, at least for now, it’s still humans making decisions about how and where to deploy that technology and what analysis should be run and what decisions should be taken based on what the data says.

Bohnet: And in some ways, some technology outperforms humans. And in some other instances, humans can outperform technology. AI, certainly, is affecting everything that we do, how we teach, of course, but also how we do our work, and is increasingly used in HR. Primarily, right now, predictive analytics is used at the screening stage, where many, many organizations—in fact most of the Fortune 500—now use a screening algorithm to look at applicant CVs, and then decide who to invite to the next stage of the hiring process. Now, an AI can be wonderful in many ways. It doesn’t fall prey to some of the issues that humans fall prey to. Namely, it doesn’t get tired. It will give application No. 575 the same time and scrutiny as it does application No. 5. And we do know that humans don’t. But at the same time, of course, AI can be biased. The data that feed the algorithm, that go into the machine learning, are incredibly important. And, in fact, research suggests they matter more than the backgrounds of the people on the design team. Data is everything. We then need to make sure we test the algorithm before it is unleashed onto the world and adjust it accordingly if we find a disparate impact, for example, based on people’s backgrounds. If we get the algorithm right, we get a lot right. We don’t then have to work with thousands or millions of people around the world, trying to affect their individual decision making; we can actually give them a tool. And an AI is just another tool that we can use. Now, of course, AI isn’t the only solution, and that’s why Siri and I focused on this combination of human decision making and assisted decision making. The human should be in the driver’s seat.

Fuller: So you mentioned earlier, Iris, that there’s a real divergence now between the regulatory and business environment in the United States, versus the EU. And I would expand that to say that Asian markets were quite distinctive relative to the traditional core OECD [Organization for Economic Co-operation and Development] countries before the emergence of revisionism, if you will, in markets like the United States. What does that mean for global enterprises? And what are the implications of these different evolutionary paths we see, the employers’ environments, and their commitments to such things as fairness? How’s that all going to play out, do you think? What are your scenarios?

Bohnet: So I was actually just in India. I talked to some of the big conglomerates in the country, and I was quite amazed at how focused they are on evidence-based decision making and people management. So they are laser focused, maybe the way some organizations in other parts of the world were a couple of years ago. So, yes, we’re seeing these different kinds of dynamics play out in different jurisdictions. And, of course, every organization has to comply with the law and now faces the challenge that they do business in India and in the EU and in the United States and might face quite different legal restrictions. So, first of all, I should say, Siri and I try to make it very clear that we are not lawyers, so we’re not giving legal advice. But some tools seem to be relevant across jurisdictions. And even the question of making work fair seems rather generalizable, in the sense that we cannot have meritocracy if we haven’t established fairness first. So that concept has resonated across jurisdictions.

Fuller: And what are those three or four practices that you think seem to be worth pursuing universally?

Bohnet: So certainly, data collection, and trying to understand what works and what doesn’t. As you said before, Joe, simple correlations, such as taking the five different assessment tools we use to hire people and, five or 10 years from now, correlating them with these people’s performance, might help us understand which of the tools we use are actually predictive of future performance. We recently worked with a financial services company headquartered in the United States but working across the globe. And they were interested in thinking about the performance-appraisal process. Many organizations invite employees to share their self-evaluations with their managers before managers make up their minds. Now, this sounds like a very democratic, inclusive process, and that’s also how it came about, in fact. As it turns out, of course, people are influenced by expectations, by cultural norms. Maybe in some parts of the world, shining the light on yourself is not the thing to do, while in others, it’s just much more common. So my self-evaluation, of course, will influence my manager, because my manager takes this as one of the inputs that they consider, and they might have a curve to fit. So we’ve been concerned about that social-influence channel, which actually has nothing to do with actual performance. So that’s what this financial services company was interested in exploring. And what made it easy for us was that it had a glitch in its system one year, where they couldn’t share self-evaluations.
And for a researcher, that’s almost finding nirvana; you’re like, “Wow, this is almost a real experiment.” So anyway, what they found was that, particularly for newly hired people who hadn’t been in the firm beforehand, not sharing self-evaluations really broke that social-influence channel between my self-evaluations and my manager’s assessment and, in fact, really leveled the playing field across different groups. So it’s that kind of specificity that Siri and I would say we need in organizations. Start with your data, identify patterns that you’re unsure about, that you’re worried about, and then go back to the evidence and fix them. And so I think the generalizable insight here across jurisdictions is really that we need to go into our practices and procedures and, as Siri said, not worry about programs, but really build fairness into everything that we do.

Chilazi: I would add, as we go in to examine our current processes and practices, to not assume that just because we’re doing something today, that it’s the right way or the perfect way. We humans have this strong tendency toward status-quo bias. But resumes are a great example that remind us that no design is neutral. So most resumes today are structured in the format where you list your past work experience with specific dates attached. So, say, 2010 to 2020, and then 2023 onwards. Well, this format makes it very clear if people have gaps on their resumes, non-continuous work histories for any reason. I know, Joe, you’ve talked about this a lot and written about it as well. We know empirically that employers still penalize candidates with non-continuous work histories, even though there’s no evidence to suggest that people with gaps are in any way less competent. So this is a design that we live with that is not neutral, because it disproportionately advantages folks with continuous work histories and disadvantages people with gaps. Some colleagues of ours ran a wonderful experiment in the U.K. that showed that a very small tweak to the formatting of the resume can actually level the playing field in this regard. And they switched from expressing past work experience in terms of specific dates to just the total amount of time, the number of years. So 10 years in role X, three years in role Y. It doesn’t actually lose any information that’s relevant to assessing a person’s competence or merit. You still see what they did, for how long, and what skills they gained. Let’s take another example of meetings. Do we have a hybrid meeting? Is everybody virtual? Is everybody in person? If we’re in person, are we sitting around a rectangular table with some people sitting at the head of the table? Or maybe some people sitting around an inner ring and others sitting around an outer ring?
These are all design choices that influence the dynamics of who’s going to be speaking in this meeting, whose voice is going to be heard, whose contributions are going to get more attention than others. So it’s really important to always have that questioning mindset when we examine how we’re doing work today.
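As an illustrative aside: the resume-format tweak Chilazi describes, replacing dated work histories with total durations per role, amounts to a small data transformation. The entry format and field names below are invented for illustration, not taken from the U.K. experiment itself.

```python
# Sketch of the resume reformatting: dated entries expose gaps between
# roles, while per-role durations preserve what the person did and for
# how long without revealing when. The input format is hypothetical.

def to_durations(history):
    """Convert (role, start_year, end_year) entries into 'N years in ROLE'
    lines, dropping the dates that would reveal employment gaps."""
    return ["%d years in %s" % (end - start, role)
            for role, start, end in history]

history = [
    ("software engineer", 2010, 2020),
    ("team lead", 2023, 2025),  # the 2020-2023 gap disappears below
]
lines = to_durations(history)
```

The reformatted lines still carry the competence-relevant information (role and tenure), which is exactly the point of the experiment: the design change removes only the gap signal.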

Fuller: Let’s turn to a couple of issues that have been prominent in the United States in the last year or two and your thoughts about them. The first is skills-based hiring. Skills-based hiring got introduced into companies’ thinking when it was becoming pretty apparent that many jobs either preferred or stipulated holding a four-year university degree, even though there was no particular data suggesting that credential was actually helpful in doing the work. And there was often disconfirming data, in the form of successful incumbents in those jobs that didn’t have degrees but were, nonetheless, productive. What’s your view about that movement? Our research here, my work with Matt Sigelman of the Burning Glass Institute, specifically, says there’s been a very marginal impact from the move to skills-based hiring. Do you see it differently? How do you see it as a principle? How does it fit with your research?

Bohnet: We are big fans of work sample tests, and that might be something like skills-based assessment, where we actually have coders write some code, and then evaluate how well they do in their coding. And you might want to hire for emotional intelligence, as well. What we are concerned about is the unstructured nature of some of our hiring processes. So think beforehand about maybe the five different ways in which you want to assess talent. And that could be by a skills-based assessment test. That could also be by a structured interview. Maybe the resume could play some role, but it shouldn’t be the decisive factor. And think about those weights beforehand. Maybe you want to give a fifth of the final weight to each of your five different assessment methodologies, or maybe you want to give some particular feature a higher share of the final weight. Basically, create a plan on how you want to assess your employees, and use a combination of different tools.
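As an illustrative aside: the pre-committed weighting Bohnet recommends, deciding the assessment methods and their weights before seeing any candidate, can be sketched as below. The specific methods and weights are illustrative assumptions, not the authors’ prescription.

```python
# Sketch of a pre-registered assessment plan: fix the methods and their
# weights in advance, then score every candidate against the same plan.
# The methods, weights, and 0-10 scale are all hypothetical choices.

WEIGHTS = {                      # decided before reviewing anyone
    "work_sample": 0.4,
    "structured_interview": 0.4,
    "resume": 0.2,               # deliberately not the decisive factor
}

def weighted_score(scores):
    """Combine per-method scores (0-10 scale) using the pre-set weights."""
    # Require a score for every planned method so no factor is skipped.
    assert set(scores) == set(WEIGHTS), "score every planned method"
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

candidate = {"work_sample": 9, "structured_interview": 7, "resume": 5}
total = weighted_score(candidate)  # 0.4*9 + 0.4*7 + 0.2*5 = 7.4
```

Committing to the weights up front is what keeps any single signal, such as the resume, from quietly becoming decisive after the fact.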

Chilazi: And that applies, by the way, not just to entry-level hiring, but even to ongoing performance evaluation and promotion decisions even quite high up in the hierarchy. As we know, competence often gets confused with confidence, and employees are often promoted on their ability to advocate for themselves and pound the table on their own behalf and be very politically savvy, rather than, necessarily, always their skills on the job. So there is a lot of promise in centering skills as we’re assessing people all throughout their careers and the potential for that to level the playing field and make our assessments more objective and more fair.

Fuller: Let’s turn to a second one, which you alluded to a bit earlier, Siri: a hypothetical meeting where some people are there in person and some are participating remotely. There’s been quite a lot of attention paid to what some would portray as a reversion to history: back-to-work policies, abandonment of some of the flexibility around hybrid and remote work that was borne of the Covid pandemic. Thoughts about that?

Chilazi: Yes, what we are seeing in the research is that remote work and, generally, just more flexible modes of working can be a key leveler of the playing field, both because they allow more people to be present in the workplace: parents, women, people with longer commutes, people with disabilities. This is one where data shows that folks with disabilities have actually had a much greater opportunity to participate in the workforce under some remote and hybrid arrangements. But, also, it often allows people to do better work while also being more satisfied with their work lives and lives outside of work, which, of course, leads to greater retention. So it’s a win-win for both employees and employers. And one of the misconceptions out there that data can help us dispel is that remote work is only, for example, desirable for women or for people with young children. Well, yes, parents of young children are definitely one of the groups that’s most enthusiastic about remote work, but we actually see, between women and men, for example, roughly equal uptake and enthusiasm for remote work. I think it’s an important reminder for all of us that all people have lives outside of work and interests—whether it’s family or hobbies or side hustles or traveling. And when we make a little bit of room for that as employers, our employees are more likely to stay and keep doing good work for us.

Fuller: We’ve certainly seen in the United States an overt decision not to have public policy play an active role in advancing workplace fairness. What role do you see public policy productively playing, if not here, then elsewhere?

Bohnet: So maybe going back to artificial intelligence, the E.U. has passed so far the most far-reaching AI regulation, for example, requiring organizations to test their algorithms before they are unleashed. Now, in the United States, we also have pockets where things along these lines have actually happened. So New York was one of the early cities to work toward requiring software that is used in human resource management to be tested beforehand. But maybe to go to a very different application, think pay. Pay is still a topic in almost every jurisdiction. I mean, Iceland, I think, is very close to actually having pay equity between women and men. But most other countries still find themselves confronted with a gap between men’s and women’s earnings. And quite a number of states in the U.S. have passed regulation that requires organizations to be more transparent about their pay. And that regulation comes in different shapes and forms in different states. But, for example, in Massachusetts, what companies—and other organizations as well—now are required to do is to disclose the pay range when they post a job. It turns out that that kind of transparency really does help decrease gaps. And so I’m quite optimistic that we will continue to experiment in different jurisdictions with different types of policies. We are, in fact, quite keen for employers around the world to learn from pay transparency laws and decrease ambiguity in whatever they do. And that strikes me as another generalizable insight across jurisdictions, because if we leave things blank, people just fill in the blanks. And often, they fill in the blanks with stereotypes.

Fuller: When an organization is facing internal resistance toward making policy changes, what’s the most practical way for them to overcome that resistance?

Bohnet: Generally, we would suggest to start small. So don’t boil the ocean. That has lots of different advantages. The first one is that you will meet less resistance. You’re just saying, “Well, we’ll try something out,” like my investment bank. “We’ll try something out. We’ll keep the whole system going. We’re just trying to learn.” So that’s the first advantage. You, just by the nature of the intervention, meet less resistance. And then, secondly, you learn something. You actually know whether it worked in the pilot. And then, thirdly, often showing is more powerful than telling people what to do or what works. So there’s very good research that people can be very resistant to change, any change, until they’ve actually seen the new machine produce more widgets. And so, we’re very big fans of small changes, piloting, and then demonstrating impact before rolling out, and even trying to get the big buy-in across the whole organization.

Fuller: Is there a specific metric that you think is the legitimate one to use for assessing that positive impact and, therefore, legitimizing it?

Chilazi: This is a great question, because this is where a lot of organizations stumble: they don’t actually define what success looks like and how it can be measured before they set out to do a whole lot of activity. So I think it’ll really depend on the situation. Are you trying to increase people’s psychological safety in meetings? Do you want to increase the diversity in your hiring? Do you want to close gender gaps in performance evaluation scores? Right? Depending on what your issue is, you might track different data points to see if you’re actually making progress. But the real take-home message here is, as Iris said earlier, use data first to diagnose what’s wrong. Where is there a gap? Where is there an opportunity for improvement? And then, before you start piloting, testing, and learning, make sure you have your metrics lined up that can tell you: are these things moving the needle in the right direction or not?

Fuller: Well, Iris and Siri, one last question. What can we expect to see from the Harvard Kennedy School Women and Public Policy Program in the near future?

Bohnet: Well, lots of different things. So certainly, that program is larger than Siri’s and my work. We also focus on women and international relations, women and security. We focus on gender negotiation. We focus on women and politics, gender equity in politics. And so Siri’s and my particular expertise, of course, is the workplace. And we are focusing on equalizing the workplace. And so, I think what you’ll see more from us is working on that toolbox, really making sure that that toolbox becomes accessible to practitioners around the world. And so that was the impetus for the book. But we can push, of course, the book now even further and think about bots, for example. Can we develop a bot that will, in fact, make those tools available at a fingertip, quite literally? And that’s then, I think, something we’ll try to work more on during the second half of this year, and hopefully then test it with organizations to see whether those types of decision aids might actually make a difference for organizations.

Chilazi: Yeah, we always learn so much from talking with practitioners. I’m personally very excited for a lot more research and hopefully, in partnership with many real-world companies that are willing to learn, not just for themselves, on what works and what doesn’t, but then are willing to share those insights with the rest of the world.

Fuller: Well, we’ll look forward to some additional research and writing. I certainly commend Make Work Fair as a book to our listeners and want to thank you, Iris, and Siri for joining us on the Managing the Future of Work podcast.

Bohnet: Thank you, Joe. It’s been such a pleasure.

Chilazi: Thank you so much for having us.

Fuller: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.
