Managing the Future of Work

Podcast

Harvard Business School Professors Bill Kerr and Joe Fuller talk to leaders grappling with the forces reshaping the nature of work.
  • 23 Sep 2020

How brain games and AI can improve HR

Can neuroscience and AI improve on traditional approaches to hiring and evaluating workers? Pymetrics' co-founder and CEO, Frida Polli, argues that the combination is necessary to overcome inherently biased human judgment and to bring empirical rigor to the task of matching talent to fast-changing job categories. The neuroscientist-turned-Harvard MBA shares her journey and explains how her company helps the likes of Unilever, LinkedIn, and Accenture factor workers' cognitive, social, and emotional aptitudes into their personnel decisions.

Joe Fuller: HR is an inexact science at best. It's hampered by subjectivity, bias, and the selective exchange of information. The changing nature of work demands a new approach. Welcome to the Managing the Future of Work podcast. I'm your host, Harvard Business School professor and visiting fellow at the American Enterprise Institute, Joe Fuller. My guest today is neuroscientist-turned-entrepreneur Frida Polli. She believes hiring and personnel management practices can benefit from a combination of games-based neurological assessments and artificial intelligence. In 2013, Frida co-founded pymetrics, an employment screening and workforce assessment firm. Clients like Unilever, LinkedIn, Boston Consulting Group, MasterCard, and Accenture have signed on to this AI-augmented approach to human resource management. Pymetrics believes this approach promotes a more meritocratic workforce by better matching specific candidates to jobs and by encouraging diversity. Critics, however, worry about the potential for AI to be biased and about the misuse of sensitive personal data. Polli contends that the changes wrought by the Covid-19 pandemic and the imperative to address systemic racism only increase the need for innovative approaches to HR, rooted in neuroscience and artificial intelligence. Frida, welcome back to HBS.

Frida Polli: Thank you, Joe. Pleasure to be here.

Fuller: Well, Frida, you're kind of a fascinating case study of a Harvard Business School student, because you came to our school as a neuroscientist. Tell us about your journey from being literally a brain scientist to getting an MBA and starting pymetrics.

Polli: I spent 10 years as an academic neuroscientist in the Harvard-MIT ecosystem, really just enjoying it tremendously. And then I think what changed for me was realizing that I wanted to use what we were learning in a way that would be more applied and helpful. At that point, one of my mentors, Steve Hyman, who was the former provost of Harvard, said, "Oh, you know, maybe you want to consider an MBA." The more I looked into it, the more I thought, "This is a good way to go." And so I came to HBS thinking I wanted to use something of what we had learned about the human condition and human psychology to tackle a big problem. And, you know, you have tons and tons of companies coming on campus. It's really just a matchmaking process for two years. And watching that made me think, "Wow, it hasn't really evolved much since I was an undergrad way back when." And we know so much more about people—that, fundamentally, they have cognitive, social, and emotional aptitudes. We also know that this cool thing called data science has come around and really facilitated a much more efficient, automated, probabilistic matching system. And then we have all these technology platforms out there that are essentially doing machine learning at scale and helping people find everything from toilet paper, to movies, to books. And I was like, "Why doesn't something like this exist for career search? Because it's so critical." And I was watching all these students who were using the tools at hand, which weren't many, to try to figure out what they wanted to be, and then three days into their internships being like, "I hate my terrible job." And so pymetrics came out of a strong desire (a) to help myself, because I was a career switcher, but also to help all these students who were trying to figure out, "What should I do next in life?" So again, it really was born out of watching this process and thinking, "Wow, Harvard students are probably the most overserved population on the planet when it comes to job opportunities. If it's not working here, it certainly must have a lot more problems elsewhere." And that's where the desire for pymetrics came to be: helping both students and companies make that matchmaking process work better for everybody.

Fuller: So talk about what pymetrics does.

Polli: Pymetrics measures people's cognitive, social, and emotional aptitudes. We've developed behavioral activities, so instead of a questionnaire or something like that, we actually put people through behavioral exercises on the computer to get more granular information about their cognitive style: their planning style, their attentional style. On the social and emotional side, it's: how risk-taking are you? What types of reward motivate you? And so on. These exercises were developed for research purposes, but they can easily be applied to other things, and that's what we ended up applying to the field of human capital. So those are the attributes we collect, and we collect them by putting people through a series of behavioral games. Then we take that data and use that exact same assay to understand, when people are performing well in a number of different roles, what aptitudes they are exhibiting. So before we do the matchmaking, we also collect data on the company's side, and we say, "Okay, people that are in marketing, versus HR, versus finance, what attributes are they exhibiting that are different?" And that's how the matchmaking process works. It's basically saying, "Okay, applicant A over here seems to have a profile that looks more like HR. Applicant B has a profile that looks more like finance." It's really making that search process optimized and efficient on both sides. So it's anywhere you need to understand the fit of a person to a role or a job. That technology can be used in hiring, obviously; that's where we started out. But it can also be used in mobility. It can be used in re-skilling, which is obviously top of mind for everybody now, because entire industries—because of Covid—are in decline, and yet new or emerging industries are burgeoning. So how do you help people who were airline pilots, for example, figure out, "Hey, what else should you do now that planes aren't flying as often?" It doesn't mean you have become irrelevant; it just means your hard skillset is maybe no longer as needed. But your soft skill set, which is what cognitive, social, and emotional aptitudes are—they're your soft skills—remains just as relevant as it was before. It's helping you use your soft skill profile to figure out the new hard skills you should be developing, given the current environment. So we really get used for a variety of different human capital processes.
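[Editor's note: to make the matching idea concrete, here is a minimal sketch of profile-to-role matching in Python. It assumes a simple nearest-benchmark approach using cosine similarity; the trait names, role benchmarks, and scores are all hypothetical, and pymetrics' actual models are certainly more sophisticated than this.]

```python
# Illustrative only: match an applicant's aptitude profile to the most
# similar role benchmark. All traits and numbers are hypothetical.
import math

# Hypothetical benchmarks: average aptitude scores (0-1) of strong
# performers in each function.
ROLE_BENCHMARKS = {
    "hr":      {"planning": 0.55, "attention": 0.60, "risk_tolerance": 0.35},
    "finance": {"planning": 0.80, "attention": 0.75, "risk_tolerance": 0.50},
}

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two trait-score dictionaries."""
    keys = a.keys() & b.keys()
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = math.sqrt(sum(a[k] ** 2 for k in keys))
    norm_b = math.sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (norm_a * norm_b)

def best_role(applicant: dict) -> str:
    """Return the benchmark role whose profile is closest to the applicant's."""
    return max(ROLE_BENCHMARKS,
               key=lambda role: cosine_similarity(applicant, ROLE_BENCHMARKS[role]))

applicant_a = {"planning": 0.50, "attention": 0.65, "risk_tolerance": 0.30}
print(best_role(applicant_a))  # -> "hr" in this toy example
```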

Fuller: Frida, you talked about using games to test these aptitudes and skills. How do you avoid people gaming the system—gaming the game? Because certainly the traditional hiring process, the job search process, is one that's fraught with all sorts of manipulation by both the potential employer and the job seeker.

Polli: There are really three reasons these games can't be gamed. One is that they're capturing your behavior, and behavior is far harder to alter, or to cheat on, so to speak, than your answers. Whenever you're measuring behavior, it's just much harder to game. The second thing I would say is that even if you were to alter your behavior in a particular way, there is no one way to play the games that is correct. In that sense it's much more like the Myers-Briggs: however you respond could be well suited; it just depends on the role. The third thing, and this is where we don't have much control over it, though we're trying to change perception, is shifting the mindset from "This job is something I have to do everything in my power to get, forcing myself, a round peg, into a square hole," to "If I show up as I am, that is what's actually going to yield the best outcome." That's the mind shift I think we all have to move towards as a society, and I think we have to start shifting employment in that direction as well, meaning really bringing your true self, your whole self, to that sphere. That, I think, is what's going to yield job success and happiness.

Fuller: That must occasionally create some awkward situations where, in fact, you're telling people that they're not cut out for something that's been a lifelong ambition.

Polli: We have probably 100 different finance models that we've built, and we could even have 10 different investment banking models. So it's very unlikely that someone is going to not be a fit for something they have yearned after their whole life; it just might be that, "Hey, you're a better fit over here than over here." There's no one prototype of somebody who's in finance; there are hundreds of prototypes. So you will find your fit if that's been your dream job and your dream role. The other thing to keep in mind is that we are one of a number of different data points that people are considering in making those decisions. That actually, I think, opens up a lot of doors and opportunities for folks.

Fuller: What's the business case for a company to use pymetrics for those processes? That all sounds fine, but is the impact that much greater that it's worth changing the way you've run these processes in the past, processes that have at least made you successful enough that you're still hiring, still re-skilling, and still promoting people?

Polli: I would equate traditional human capital processes very much to what we used to do when we ... Do you remember travel agents? Do you remember going to a travel agent's physical...

Fuller: Sure.

Polli: ... a travel agent and saying, "Do you have an itinerary that goes to this location?" And they would scan their system and be like, "Yes" or "No," and then you'd have to walk down the street. It's not that we've fundamentally changed the processes; we've just made them technology-enabled. So what types of benefits do people see? They see a much greater ability to find talent in a more efficient, and also more effective, way. Meaning, a lot of people are trying to broaden the aperture, looking at talent from everywhere, not just where they've shone their spotlight. Diversity, and wanting candidates who don't just look like the ones they've already hired, is top of mind for folks. And we have developed a process whereby we audit all of our algorithms in real time, so we can ensure there is no gender or ethnic bias in the process. And that's massive, right, because we know from decades' worth of studies that, unfortunately, human processes are unavoidably biased—unconscious bias is a thing; it's Daniel Kahneman's System 1 thinking. It's just unavoidable that our human processes are going to have that bias. And then there's the candidate experience. Typically, if someone applies for a job and for whatever reason doesn't get it, that's the end of the road. With pymetrics, we can actually reroute them to other jobs at that company, or even other jobs at partner companies, where they could be a good fit. So even from the candidate-experience standpoint, it yields better results. Across all of those areas, people see not only cost saving—that's table stakes—but better-suited candidates who are more likely to stick around. We see increases in retention, increases in performance, and pretty significant increases in gender, ethnic, socioeconomic, and age diversity.

Fuller: You're getting a lot of insight about individuals: their makeup, their preferences. Presumably the type of data you gather at any stage in their career where pymetrics tools are being used could allow you to gain other types of insights about them by correlation or transitivity. Who owns the data? And how do you ensure that it doesn't somehow get used in a perverse way by either an individual or a company?

Polli: Even as a teeny tiny company, when we were much smaller than we are now, we really prioritized data privacy and security for the reasons you've just described. But I think the other thing that's relatively unique about pymetrics in this regard is that the person who goes through pymetrics owns their data. So at any point, if I don't want my data to exist on the pymetrics system, I have the ability to just go and delete it. It also allows for portability, right? So if I go through pymetrics for an application process for company A, but then I'm asked to do so for company B, I can actually just take my data and say, "Okay, here, I want to use it over here," rather than, "That data was owned by the company I applied to."

Fuller: For a job that has existed for a long time, it's pretty straightforward: you'd have enough data to draw on about the attributes of people who succeeded at it. What do you do about an emerging role? Because, certainly, with the type of dynamism we're seeing in digitalization and sciences like artificial intelligence, a lot of roles have a short half-life and a high dynamism quotient.

Polli: We can do a number of different things. First, we can very quickly benchmark on people who are successful in those roles, even if they don't have a long history. So let's say you have people who have been in those roles for three months or six months; we can start to build a profile of what that looks like very quickly, using your own folks, so you have some comfort that they're the type of people you would want to find. That's one option. The other option, and this happens often, is that people come to us and say, "Oh, I don't have any data scientists, but I'd like to hire some," or whatever the emerging field is. We can use industry benchmark data at that point to start them off; we call those starter models. Then, as they build that capacity in-house, we switch them over to their own custom, unique models. We also recently started partnering with a platform called Burning Glass. Burning Glass has massive data on skills, and what we're doing is linking skills data at a very granular level to pymetrics' aptitude, or soft skill, data. So you can imagine a future state where you could actually look at the skill level and say, "This skill right here, what are the aptitudes that are typically associated with it?" You build almost a profile by taking all of these different skills and saying, "Here are the aptitudinal correlates; let's find people like that." It's actually yielded some really exciting modularity in the system that I think will help address this shift we see everywhere: people not wanting to understand a job so much as wanting to understand, at a much more granular level, the activities that make up the job and how those can be decomposed and recomposed in exciting ways. And I think that's really what your question gets to: how do we build a system that allows us to capture that?
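[Editor's note: as an illustration of that skills-to-aptitudes linkage, here is a minimal sketch of composing a "starter" role profile from the aptitude correlates of its constituent skills. The skill names, trait names, numbers, and the simple averaging step are all hypothetical assumptions for the example, not pymetrics' or Burning Glass's actual data or method.]

```python
# Illustrative only: build a starter aptitude profile for a role by
# averaging the (hypothetical) aptitude correlates of its skills.
from statistics import mean

# Hypothetical aptitude correlates (0-1) for individual skills.
SKILL_APTITUDES = {
    "sql":              {"planning": 0.70, "attention": 0.80, "risk_tolerance": 0.30},
    "stakeholder_mgmt": {"planning": 0.60, "attention": 0.50, "risk_tolerance": 0.60},
    "forecasting":      {"planning": 0.90, "attention": 0.70, "risk_tolerance": 0.40},
}

def starter_profile(skills: list) -> dict:
    """Average the aptitude correlates of a role's constituent skills."""
    traits = {t for s in skills for t in SKILL_APTITUDES[s]}
    return {t: mean(SKILL_APTITUDES[s][t] for s in skills) for t in traits}

# A hypothetical "junior analyst" role defined by two skills.
print(starter_profile(["sql", "forecasting"]))
# -> planning 0.80, attention 0.75, risk_tolerance 0.35
```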

Fuller: Could you also just talk a little bit more about re-skilling, because that's the further expression of this, you've got incumbents whose jobs are being changed so fast. To what extent do you find that people's prior capabilities in the job allow them to take on necessary new skills, or do they end up being redirected to something that's got more in common with what they have been doing than the way their current job was going to evolve?

Polli: If you combine hard skills and soft skills, which is what we're doing increasingly, I think that's when you're able to answer those questions in a much more data-driven way. What I mean by that is, for example, this platform I was mentioning, Burning Glass, has done all sorts of work looking at roles that are proximal when it comes to skills. They may not be exactly what you'd think of, but they're skills-proximal. That's fascinating. Then you can also think of roles that are soft-skill proximal, and I'll give you a perfect example of this. We looked at pilots recently for a study we're doing with a number of academics, and what we found is that pilots actually have a lot of soft skill overlap with engineers, but not as much hard skill overlap. You can imagine that the type of thought processes they bring to being a pilot are actually very similar. So that's a match that's not skills-proximal but is aptitude-proximal, or soft-skill proximal, and the next obvious thing is, "Okay, then let's hook them up with a class in Python." That takes you to a personalized learning pathway, and that's what we're doing increasingly in terms of re-skilling.

Fuller: Yes, we're very familiar with Burning Glass here at the [Managing the] Future of Work project. They have been very gracious in letting us use their data for a lot of our research, and, as you've said, they've done some very interesting work, specifically that work with the World Economic Forum and Boston Consulting Group on skills adjacencies. And, happily, the CEO is a Harvard Business School graduate, so there you go. But let's talk a little bit about Covid-19 and how that's going to affect these processes and what you've seen with your clients. We're talking in early August 2020, and we're beginning to settle into, not a new normal, but at least the next normal in terms of everything from the nature of work to the nature of demand to what jobs are going to continue to be hired for. What are you seeing, and how do you think this is going to manifest itself in terms of both the tools you offer your customers and how they go about managing these processes?

Polli: We've seen a number of different trends that I think are relevant. One is that, while hiring may have slowed down or halted in a bunch of different industries, the need for these companies to respond to the new normal has not changed. So now they're turning much more to internal mobility. It doesn't change the demand for pymetrics; it just changes whether you're evaluating external people versus internal people. We've seen that with a lot of our clients. The other thing I would say is that this isn't only happening within companies; it's also happening at the macro level, where entire industries are having to rethink their fundamental business models to some extent. I think everyone's business model is being rethought. I still believe there's core value to most businesses out there, but the delivery mechanism, how they're getting paid, and everything else is changing dramatically. So, again, if you're trying to understand the fit of a person to a job while all of these jobs are fundamentally shifting, I think technology is critical in this regard, much more so than a manual process, which just can't keep up with all of the data coming at us. And Covid-19 has really precipitated digital transformation in a way that was not happening before, because, again, think about it: there are a lot of legacy processes. Think of the job fair. What if you took that and made it a digital process with some actual matching involved? That's much higher value. And you can see that in many different types of business processes where the paper-and-pencil version had a lot of legacy to it, but not as much value.

Fuller: Obviously, another issue at the time we're talking is the real focus on economic inequality, particularly as it affects communities of color and the African American community. How can tools like pymetrics change that outcome and help companies lower those barriers? Because when I talk to companies, many of them are distinctly saying they're going to revisit their hiring practices, but when you get beyond that in the conversation, they don't really know what else to do. And if we believe that this economic inequality is an expression of systemic bias, then it's hard to imagine the challenges are going to show up merely in the hiring process. It's also the onboarding process, the training process, the advancement process.

Polli: Yeah. So there's been a lot of great thinking on this in the last year or two, including a great report put out in part by Iris Bohnet, who's at the Kennedy School at Harvard, on what works: what data-driven approaches have been shown to reduce bias. So again, back to Daniel Kahneman's book, Thinking, Fast and Slow, where he outlined System 1 and System 2 thinking: I always get frustrated when I hear that algorithms are the most scalable form of bias. I'm like, "No, actually the human brain is the ultimate bias machine." Unfortunately, System 1 thinking is a hardwired design flaw that's there from the day someone is born. So I think we have to be realistic about the fact that all of this bias we're seeing in AI and everything else is really the result of human decision-making processes now being mirrored by these AI systems. But there's actually a lot of proof that an audited algorithmic system—and that's what pymetrics is—is far more capable of mitigating bias than the human brain is, because all of the data we have on unconscious-bias training shows that it's at best ineffective and at worst can actually be harmful. And we now have data on over half a million people who have gone through our system showing that, compared to either resume review or traditional cognitive testing (which is akin to IQ testing, though slightly different), we're 20 to 180 percent better at including people of color in whatever human capital process we're engaged in. And resume review or cognitive testing is used by over 60 percent of companies. I think the most important take-home is that human resume review is deeply flawed. There was a Proceedings of the National Academy of Sciences (PNAS) paper put out a few years ago that looked at 30 years of what are called callback studies, or audit studies. That's where you look at the impact of gender and race by just changing the name on a resume. So instead of John Williams, it's Jamal Washington. Just changing that, with the exact same resumes, so the credentials are identical, got John 10 interviews and Jamal 7. That is the problem with System 1 thinking. That is the ultimate design fault of the human brain. And again, it's not because we're bad people; it's because System 1 thinking developed to be fast. When the tiger is running towards me, I don't want to have a thoughtful conversation about what I should do; I just want to react instantly. The problem is that this gets activated in all these places where it's not helpful. Then you go back to an audited system like pymetrics that is basically recommending 9 or 9.5 people of color for every 10 Caucasians. We'd like it to be 10 and 10, but we're infinitely better than some of the systems out there, and I think that's the promise of audited technology: we can get to that place of parity. And I think it's a hard pill for humans to swallow, because we're nice people, we care. That was the crazy thing about this PNAS study: it showed essentially no change in this discrimination against people of color even as, at the same time, people were espousing diversity as a value.

Fuller: You've started touching on something that's of real interest to me. If you look across rules and regulations that have evolved over time affecting things like higher ed or employment, there's an interesting question about how fit for purpose they are going forward. For example, in the management of Title IV of the Higher Education Act—which covers student loans and grants—all the logic of how the money is disbursed, what it'll fund, and even the calendar by which it's disbursed is rooted in the four-year college experience of 1964, literally. The last time the definition of online education was updated was in the 1998 reauthorization of the Higher Education Act—so a little bit behind the times. And in employment law, you have the four-fifths rule.

Polli: The four-fifths rule essentially states that any employment selection device must select no fewer than eight of one group for every ten of another group. And it's proportional to pipelines: if you have ten of one group applying out of a hundred, it's proportional to that pipeline, right? But essentially it means the ratio of selection rates has to be at least 0.8. Again, it's more nuanced than that, but essentially that's where you're going to go.
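[Editor's note: to make the arithmetic concrete, here is a minimal sketch of that selection-rate comparison in Python. The counts are hypothetical; note that, with the callback-study numbers cited above (10 interviews versus 7 from identical resumes), the ratio would be 0.7, below the four-fifths threshold.]

```python
# Illustrative only: the four-fifths (adverse impact) check.
# Compare each group's selection rate to the highest group's rate
# and flag any ratio below 0.8. All counts are hypothetical.

def adverse_impact_ratios(selected: dict, applied: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

selected = {"group_a": 10, "group_b": 7}    # e.g., interview callbacks
applied = {"group_a": 100, "group_b": 100}  # identical resumes sent

for group, ratio in adverse_impact_ratios(selected, applied).items():
    status = "OK" if ratio >= 0.8 else "fails the four-fifths rule"
    print(f"{group}: {ratio:.2f} ({status})")
# group_a: 1.00 (OK)
# group_b: 0.70 (fails the four-fifths rule)
```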

Fuller: And if you don't meet that standard consistently, the burden does fall on the employer to demonstrate.

Polli: So there is this carve-out called business necessity [crosstalk], if you've heard of that. It basically says, "Okay, if you don't meet the four-fifths standard, so long as you can prove business necessity, you could still be legal." Then the question becomes: what is business necessity? And it's a pretty big loophole, right? I think that's how these cognitive tests, which, by the way, on average select only three to five people of color for every [10] Caucasians, mainly because they index heavily on socioeconomic advantage, have persisted. Sixty percent of companies use them. And we've had folks come up to us and say, "Well, we'd love to use pymetrics. It sounds awesome. I know you ensure that all of your algorithms are at 0.8 or above," right? And, again, I'm told the average is about 0.95. "However, have you ever been sued?" Proudly, I say, "No." And then they look at me with disdain and say, "Well, that's too bad. Please come back to us when you've been sued and won." So this is, I think, indicative—to answer your question—of how regulation is helping or hurting. I am more defensible after I've been sued—and, by the way, you generally only get sued if you've actually shown bias. So the idea is: you're biased, but you've been sued and shown business necessity, so you can remain biased while being legally defensible in court. That's somehow more saleable to certain folks than a technology that doesn't break the four-fifths rule and therefore may never be taken to court, hopefully, at least not on that. I think that's the problem with existing regulation. It's a form of regulatory capture, right? You've basically made it easier for tools that need to rely on this business necessity clause to exist. I wouldn't say they have an advantage, but they're certainly battle-tested. And I think that's a challenge. So we at pymetrics are actually very supportive of certain initiatives that are happening right now at the local and state level. The idea behind them is: let's have these tools report on the impact they're having. Let's be transparent about the impact of these tools, regardless of what the tools are. It's not about AI tools versus non-AI tools; for any tool being used in an employment process, let's see what its impact is, particularly on communities of color. One thing that's really interesting: in Massachusetts, the Equal Pay Act basically said, "Let's look at your data on equal pay, and if you find a problem, let's fix it." Whereas previously the barrier to enacting equal pay was that everyone was terrified of ever looking at that data, because if they found something, they would be in deep trouble. And we have to, in my opinion—and maybe it's a pipe dream—but I really believe we have to move towards that in employment.

Fuller: So Frida, I know this is still fairly early in pymetrics’ development history, but where do you see this whole field going? And what are your aspirations for both the company, and for the field more broadly?

Polli: Yeah, I mean, look, the vision I had when I started pymetrics, back in my HBS days, was of a world where finding your optimal career is as easy as finding anything else in this modern world, and where the likelihood of that thing you found being the best suited to you and the most satisfying is extremely high. I want this field to facilitate that. Just imagine if the job-seeking process, for both sides of the equation, were far faster, much more likely to yield a great result, and much more equitable towards people. That's where I think this field of people analytics, or whatever you want to call it, needs to move. We can't continue in the 21st century with it taking three, six, nine, twelve months to fill a job. We have to make all of these processes more efficient and far more likely to yield a good outcome, in a way that uses all the talent we have. How can we live in a world where 90 percent of people are not even being considered because we're still relying on a manual process? So again, it's about making everything far more user-friendly for both sides, but the end goal is to make it more equitable and to yield a better outcome for everybody involved.

Fuller: Frida, thanks for joining us and sharing the story of pymetrics and the innovations you're pioneering in the application of cognitive science to human asset management. It's very intriguing, and we're really looking forward to seeing how things unfold in the future.

Polli: Joe, it was a pleasure to be here. I'm a huge fan of your podcast and just very excited to be able to contribute. So thank you so much.

Fuller: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you listen to podcasts. You can find out more about the Managing the Future of Work project at our website www.hbs.edu/managing-the-future-of-work/. While you’re there, sign up for our newsletter.
