Podcast
- 20 Sep 2023
- Managing the Future of Work
The EEOC’s Keith Sonderling on job fairness in the age of AI
Joe Fuller: It’s just about impossible to discuss the future of work without pondering the influence of artificial intelligence. Will automation frustrate the ambitions of job seekers and perpetuate historical patterns of bias? Or will job seekers use generative AI to get past robotic gatekeepers? Will HR functions flourish or disappear? But the debate over the use of AI in hiring obscures the extent to which the technology is already being used in all aspects of employment. That raises the question of whether AI eliminates or amplifies problems like bias and wage discrimination that have plagued the system in the past. Employers and regulators are just beginning to grapple with how to factor AI into policing employment discrimination and determining employers’ responsibilities. At the federal level, the U.S. Equal Employment Opportunity Commission, or EEOC, is charged with enforcing civil rights laws in employment. As HR automation proliferates, how is the agency responding?
Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Harvard Business School professor and non-resident senior fellow at the American Enterprise Institute, Joe Fuller. My guest today is EEOC Commissioner Keith Sonderling. Since joining the commission in 2020, Keith has taken a special interest in AI and related technologies. We’ll weigh the potential benefits and drawbacks of HR automation, and we’ll talk about approaches to regulating the technology in the U.S. and internationally. We’ll also consider the EEOC’s guidance to employers and the need for close government-business cooperation in the future. And, amid concerns about privacy, we’ll talk about why it’s important, even beneficial, for companies to responsibly gather data throughout the AI-enabled employment lifecycle. Welcome to the Managing the Future of Work podcast, Keith.
Keith Sonderling: Thank you for having me.
Fuller: Keith, you are a Commissioner at the EEOC. How is it that you find yourself in that type of position?
Sonderling: Well, I was a labor and employment lawyer in Florida. I was a management-side attorney defending companies against government investigations by agencies like the EEOC and the Department of Labor. In 2017, I decided to move to Washington, D.C., and join the U.S. Department of Labor at the Wage and Hour Division. The next thing I knew, I was the acting and deputy administrator of the division, which handles overtime and minimum wage, but also broader issues really affecting the workforce, like independent contractor versus employee status, joint employer status, and a lot of the big issues that are really shaping today’s workforce economy. So I was able to get involved in that and do a lot of really innovative things on independent contractor status and joint employer status. And then I was nominated to be a Commissioner at the U.S. Equal Employment Opportunity Commission, which, for a labor and employment lawyer like myself, is the crown jewel. I constantly say that we are the premier civil rights law enforcement agency in the world, because I’m confident we are. This agency was created out of Martin Luther King marching in Washington, D.C., in the 1960s, which led to Title VII of the Civil Rights Act of 1964. So it is a tremendously important agency that really adapts with the workforce and with what is going on in the economy.
Fuller: So, Keith, I think a lot of people have certainly heard the acronym EEOC, and some may even know it stands for the Equal Employment Opportunity Commission. But could you give us a quick primer of the EEOC and what it does?
Sonderling: So for those of you who aren’t aware, the EEOC is the agency that deals with, really, the big-ticket labor and employment issues. So when you think about the #MeToo movement, when you think about pay equity, when you think about diversity, equity, inclusion programs, all types of discrimination against protected characteristics, such as your sex, age, religion, disability, pregnancy, genetic information—really, the larger issues that HR professionals deal with on a daily basis. Our agency encompasses, really, all the terms and conditions of your employment. So everything from, of course, hiring, firing, promotion, wages, training, benefits—that is covered under federal anti-discrimination law. So we have a very large mandate. And something unique about our agency and the way civil rights laws in the workplace were enacted is that, in the United States—and a lot of people don’t realize this—you cannot sue your employer for discrimination, whether you work for a private sector company, state or local government, or the federal government, without coming to the EEOC first. So we literally see every case of employment discrimination nationwide, no matter who you work for. And it puts us in a really good position to really look at the future, look at the future of work, see where it’s going, from not only a discrimination perspective, but an equal employment opportunity perspective.
Fuller: So, Keith, one thing that’s very interesting, of course, is the emergence of artificial intelligence, specifically generative AI—things like ChatGPT—and how that’s going to affect employment. How are you viewing that as a commission, and what are the most important issues that we need to start thinking about in that domain as it relates to the EEOC’s mandate?
Sonderling: So you can have the conversation about ChatGPT and generative AI and all the stats out there about how it’s going to displace workers at all levels within an organization—from those with advanced degrees to those who are entry level, making minimum wage—and how companies are going to have to adapt, not only implementing these technologies, but also training, reskilling, and upskilling existing workers and finding them new positions, because this is also going to create a lot of new jobs. And when you’re making those workforce changes—whether you’re going through automation or using ChatGPT to take over some of these longstanding functions—how are you going to ensure that, for the workers who are displaced and the workers who are moving into other jobs or getting reskilling and upskilling opportunities, there’s not going to be discrimination, and that the impact is not going to fall disproportionately on certain national origins or races?
Fuller: So did the commission get interested in it because you were reading The New York Times and The Wall Street Journal, because you were beginning to see instances in which you thought artificial intelligence was driving outcomes that violated the law, because there were complaints suggesting that?
Sonderling: I started getting involved in artificial intelligence in the workplace not long after I was confirmed, because when I got this position, I not only wanted to make an impact, I really talked to a lot of people out there—CHROs, general counsels, and other people in this industry. And I asked, “What are the top issues?”—aside from, obviously, Covid at the time; we were dealing with vaccination requirements, and that was, thankfully, going to pass. But what are the long-term issues affecting human resources and human capital management? And I heard artificial intelligence. And I didn’t really understand at the time what that was and the impact it was already having on HR. A lot of the news reports, a lot of the articles at the time were all discussing robots, especially in logistics and manufacturing. And that’s what most people thought was the big issue. But when I really dove into it, I found it’s not. The big issue is machine learning and artificial intelligence replacing HR professionals, performing tasks that had been done entirely by humans since the Industrial Revolution. And that is a completely different conversation than generative AI, than ChatGPT. So that has been my focus, because when I dove into it, I realized that these products are already being used significantly by companies, large and small, to make actual employment decisions. So that’s where I wanted to focus our efforts. Because each use of AI in HR fits squarely within the EEOC’s mandate.
Fuller: Well, that’s something I think many people may not be conscious of. Professor Marvin Minsky at MIT used to describe AI as a “suitcase word”—you can jam anything you want in there, close it up, call it AI, and no one can see what’s in it. And AI was being pretty widely used, as you mentioned, before Covid: applicant tracking systems were beginning to show up in performance evaluation and applicant interviewing processes, really suffusing the process of candidate identification, selection, and hiring. And, of course, now with generative AI, it has become much more apparent to people how powerful these technologies are. When you talk to employers about this, what are you hearing from them about their use cases, the benefits they’re seeking? And what kind of questions are they asking you for guidance on?
Sonderling: There’s AI right now that will do performance management. There’s AI that will even tell employees that they’re fired if they don’t hit their goal. So it’s being used across the A to Z of the employment relationship. And from the EEOC’s perspective, we regulate not only each and every use of the software, but each and every employment decision that employers make across the employment life cycle. So you can see why there’s a rush to the EEOC from general counsels, from labor and employment lawyers, and from CHROs asking how they should be using this.
Fuller: So what type of guidance are companies seeking from you? Because this is a novel technology. And even the idea that we would regulate AI, that’s an ongoing debate on Capitol Hill. So what kind of guidance are they seeking? What are they concerned about? How are you guiding them so they don’t get themselves into trouble inadvertently?
Sonderling: Most companies realize that they have to use AI in HR to stay competitive. Because not only are their competitors going to use it, but with the changing dynamics of the workforce—the volume of applicants, the amount of shifting of employees between jobs, what employees now require from their employers to stay—the question is no longer, “Am I going to use this software in HR?” The question I am posing, and the question people come to me with, is, “You know you have to use it, so how are you going to use it, and for what purpose? And how are you going to comply with longstanding anti-discrimination law?” Because when you’re dealing with AI in HR, you’re dealing with fundamental civil rights—the ability to enter and thrive in the workforce without being discriminated against. And the reason I bring that up is because a lot of the vendors in this space are selling into talent acquisition, they’re selling into management that is looking to diversify their workforce. Their starting point is that we have a problem in HR, in hiring, and that problem is bias, and the bias comes from the humans. So how do we eliminate the human from this entire equation? Because we know all the statistics about bias in hiring—whether you’re a woman versus a man, African American or Asian American versus somebody who’s not. So companies are looking to solve that problem the way they do in Silicon Valley, with artificial intelligence. The idea is: if we can eliminate the human from this, we can actually use metrics on talent, on skills, on prior job history—as opposed to what the person looks like or what religion they are, all the factors that are unlawful to base a hiring decision on. And that’s really how these products are being sold into companies. And that may sound great, because a lot of these products promise diversity, equity, and inclusion by removing the human from that decision making. But there’s a lot to unpack there, because, as everyone knows at this point, AI is just replicating decisions—whether it’s learning from the dataset or somebody going in manually and telling it to pick up certain characteristics. And I think it’s really important to break down what AI means in HR—how do we translate it into language that HR professionals and lawyers understand? When you talk about datasets in HR, all the dataset is, is either your applicant pool or your current workforce. So let’s demystify what that means. It’s just as simple as anything else. If your applicants are made up of one gender, race, or national origin, and you put that through the machine learning, it may think that the dominant group in there is what it should be looking for, and it may unnecessarily, unlawfully exclude people who are not in that group. So then you could have discrimination. And that’s where you get into the theory of disparate impact discrimination.
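A note on the mechanics of that last point: in practice, disparate impact is commonly screened with the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact. Here is a minimal sketch of that arithmetic in Python, using hypothetical applicant and hire counts rather than any real dataset:

```python
# Minimal sketch of the EEOC "four-fifths rule" screen for disparate impact.
# The group names and counts below are hypothetical.

def selection_rates(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (applicants, hires); returns hire rate per group."""
    return {g: hires / applicants for g, (applicants, hires) in counts.items()}

def four_fifths_flags(counts: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the top group's rate."""
    rates = selection_rates(counts)
    best = max(rates.values())
    return {g: (rate / best) < 0.8 for g, rate in rates.items()}

# If a screening tool advances 30% of one group but only 12% of another,
# the impact ratio is 0.12 / 0.30 = 0.4, well under the 0.8 threshold,
# flagging possible disparate impact the employer would need to justify
# as job-related.
example = {"group_a": (200, 60), "group_b": (200, 24)}
print(selection_rates(example))    # {'group_a': 0.3, 'group_b': 0.12}
print(four_fifths_flags(example))  # {'group_a': False, 'group_b': True}
```

The same selection-rate arithmetic underlies the pre-deployment bias audits discussed later in the conversation.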
Fuller: So it sounds, as these technologies are being applied, that there are some real risks, particularly in the training data that’s used: the type of bias that expressed itself in previous processes of both hiring and advancing people will just get ported into AI. But it also sounds as if your sense is that AI could be used very productively to help assuage some of those concerns. So it’s a bit of a balance: how do we prevent replicating previous error while using these new tools to help overcome some of the causes of that error?
Sonderling: For each potential harm of AI, there’s a potential benefit as well, to help us move forward and help the EEOC. Part of our mission is to advance equal employment opportunity for all in the workforce. And I’m a believer that AI can actually help us get there, whether it’s ensuring that we’re getting the right people into the applicant pool or allowing employers to take a skills-based, talent-based approach, using artificial intelligence to find out what those skills actually are rather than just relying on the status quo. Because so many traditional hiring and promotion processes haven’t been very transparent to begin with. Take what we call “tap-on-the-shoulder” recruiting policies, which the EEOC has long warned against: “Well, you’re my friend, I work with you all the time, apply for this job before it opens, or apply now.” That’s generally where there have been issues with promotions and with people not getting into the higher levels within an organization. But using AI, an employer can say, in a very transparent way, “Here are the skills required for this promotion. Here is who has those skills within our workforce. And here’s who should apply and get a fair shake.” Where before, those opportunities didn’t even exist because of the longstanding employment policies.
Fuller: You’ve taken a very interesting public stance that we ought to proceed with some limited forms of government regulation, augmented by a very strong collaboration between government and industry. Could you elaborate on that, and, particularly, the notion of a more collaborative relationship, what would be elements of that? What would define success in that model?
Sonderling: For me, in the executive branch, we can enforce the laws that are on the books; that’s what our job is. Whether we need new regulation, new laws, a new AI commission, is the hottest topic in Washington, D.C., and across the world. Understanding our limitations within the EEOC—whether it’s budget, investigative capacity, or skills—I believe the best thing we can be doing right now is ensuring that companies who are using these products are working with their vendors to institute their own self-governance. And there’s a lot of research and work out there; whether it’s AI vendors or the big tech companies like Microsoft, Meta, Google, Workday, Salesforce, they’re all putting out their own policies and procedures related to using AI ethically and lawfully. A lot of companies who want to use AI across the board may not have those internal resources. They may not have PhDs in ethics; they may not have PhDs in machine learning. But there’s a lot they can use, existing right now—policies, procedures, mission statements—that are out there for free on the internet. And that’s where I’m trying to shift the conversation: the law is the law. Title VII, which is the law we enforce, applies to all employment decisions, whether they’re made by a human or a robot or anyone else. So knowing that existing framework exists, how can we now work with the people who are developing and deploying these products to put a whole corporate governance and ethics system in place around the existing laws, so we can use these tools not blindly but purposefully? I think that’s where a lot of the distraction is now, with how we regulate AI. The EEOC is never going to be able to regulate technology. We know how to regulate employment decisions.
Fuller: It’s interesting. If we look at the history of regulation, even going back to the 1980s, there were instances in which agencies stepped back from parts of their mandates—I’m thinking specifically of the Federal Communications Commission. Historically, the FCC had to approve devices to be assigned open frequencies in the radio spectrum. And it just said, “We can’t keep up with the technology.” That unleashed an absolute boom in American telecommunications technology. But are there, nonetheless, best practices that we can start sharing with companies? You alluded a little bit to the tech providers. And also, where does the ultimate onus lie here? These technologies are so complicated and so advanced, changing so fast and learning so fast. Is it incumbent on the Microsofts and the Googles to ensure their tools aren’t vulnerable to this? Or is it incumbent upon the corporate user that’s piecing these technologies together to make sure that the tech stack they create doesn’t produce the kinds of outcomes that are going to attract your attention—and not in a good way?
Sonderling: So this is a really important concept about the ecosystem around AI in HR. And it’s really a big issue now, in the sense that the EEOC, prior to this technology, really knew its world very well. We at the EEOC have jurisdiction over employers, unions, and staffing agencies. And, of course, we’re protecting employees from discrimination. But now you have software engineers and vendors coming into the equation who are developing software to make these employment decisions for employers. And they don’t know labor and employment law, they don’t know HR—nor should they, because they need to know how to actually build the algorithms and write the code. So, in a sense, our scope is much broader now, reaching everyone involved in this equation. How do we teach these software engineers, these entrepreneurs, longstanding anti-discrimination law? Because we know they do not want to make a product that is going to violate civil rights. Not only would nobody buy that product, it’s horrible publicity for everyone involved—from the VCs funding these products, to those developing them, to the companies using them, and then, obviously, to the employees at the end of the day who are subject to bias because of the tool. So you see, it’s a much broader ecosystem that we have to deal with now, and everyone speaks a different language. And I think that’s the tricky part: how do we educate everyone, broadly, in that sense?
Fuller: And, Keith, what about best practices? What do you think is the state-of-the-art out there?
Sonderling: I think best practices start with knowing that the employer, under employment law, is 100 percent liable for the employment decision. This is unlike other areas of law: in our case, the vendor does not have liability. So the vendors can show how their product works, how it could advance diversity, equity, and inclusion, how it can make hiring more efficient, how it can make performance management more efficient. But it’s on the employer to push back and say, “Okay, well, how is that going to work at my company for this specific use?” Because when the EEOC does an investigation, we’re not going to look broadly at what generic datasets run through a vendor’s system produce. What we’re looking at is how it actually applies to this one job, with this one job description, in this location in the United States, looking at that workforce. And that’s the really tricky equation here, because every line in a job description matters legally, and the employer has to prove each and every requirement in there is related to the job. The company needs to work with the vendor and ask, “How are you going to test this on my system? How are you going to ensure that, if I use it for this job in this location, the testing is going to comply with federal anti-discrimination law? And then how are you going to help me retest it as the position changes?” And I think another big part of best practices is that employers can demand that these vendors properly train the individual employees at the company who will use these programs. A bad actor within your company can go in and say, “Well, I want to exclude all people of this race. I want to exclude all women from the workforce.” And then all of that frontend work is out the window.
Fuller: That would be a genuinely bad actor.
Sonderling: You could see why it’s important that, once you’ve bought it, only certain individuals within the company are trained by the vendor and understand the fair use of it, that access is limited to that pool of individuals you trust and who have been trained, and that there’s constant monitoring of who’s using it, to make sure decisions are not being made with bias or unlawfully. And then, throughout the lifecycle of the product, building that corporate governance around policies and fair use. This is something companies can be doing right now as a best practice and a way to mitigate damage. Not just saying, “Okay, we’ve adopted these policies, and all of our AI and machine learning is going to be in accordance with the law and ethical principles”—which is great, and companies should be doing it—but building handbooks, policies, and procedures around the use of it. When we saw the spike in claims related to sexual harassment after the #MeToo movement was front-page national news, what we saw is that companies doubled down on sexual harassment policies. And we saw a decrease in sexual harassment claims after that. It took a very bad event for that to happen. And that’s what I’m trying to prevent with AI discrimination now. If you have those policies and procedures in place, then if somebody wants to use it for the wrong purpose, to inject bias, they can be dealt with quickly, and it’s not going to affect the whole organization.
Fuller: Certainly, our research has also revealed a disconnect between what senior managers think is happening in the company—largely because they know there are policies and goals that have been set—versus what’s happening on the shop floor, in things like hiring and providing something as simple as regular, actionable, clear feedback to people. So the need to get companies to be aware that there’s a difference between intent and execution, I think, is borne out in multiple dimensions. Let’s broaden the scope a little bit here, Keith, because, of course, you’re responsible for enforcement of employment law in the United States. But most of the companies at the cutting edge of using these tools are large companies; they’re global employers. And we have major regulatory bodies in places like Brussels at the European Union that are working on this. Is the EEOC collaborating with them? What’s the global state of play, the state of the dialogue, and are we in sync with what other big trading blocs are doing?
Sonderling: In the EU, they’re taking the lead, very much like they did with GDPR, in trying to set a global standard. As you correctly pointed out, this software, especially for multinational corporations who can afford to purchase it or develop it internally, is certainly going to impact the entire world. And when it comes to employment law and civil rights in the workplace, government bodies across the world generally look to the EEOC because of how impactful our laws are, and because these multinational corporations do not want to discriminate and want to follow the United States’ lead on this. What we’re seeing in Europe with the EU AI Act is a much different approach than we’ve seen in the United States: not only regulating the technology, but putting use cases of a product into certain categories. They have different categories of risk, from low risk to unacceptable risk. And they’ve said the use of AI in employment is going to be in the high-risk category, which will subject it to disclosures, robust audits, and penalties as well. That’s just a different state of affairs than the United States, where you’re free to use any technology, any product—but if you’re going to violate the law, there are going to be consequences. I believe that employment decisions, even in the United States, carry a lot of risk with them, because of discrimination and because of my agency. So, outside of a government saying these are high-risk categories, I think a lot of employers already understand that if you’re going to use tools that affect somebody’s livelihood—versus a tool that makes a delivery faster or looks at accounting spreadsheets—and potentially put someone out of a job based upon a protected characteristic, there’s a lot you can be doing in advance. Take the EU’s pre-deployment requirement, which is also what we’re seeing in New York City: you don’t need to wait for a city like New York to say, “Do a pre-deployment audit.” You don’t need to wait for the EU’s AI Act to pass to require you to do that. There’s nothing preventing you from doing it now, using longstanding structures that have been in place since the 1960s—a lot of EEOC guidance on employment testing from the 1970s, which industrial and organizational psychologists have been trained on and still use—and corporations know that. Corporations are already doing risk-based audits in other areas of the business. If they want to use these tools, there’s nothing preventing them from applying the existing framework that Europe or New York is going to require and doing those pre-deployment audits, using the longstanding EEOC guidance, before they ever let a tool make a decision on someone’s livelihood. And if it doesn’t work, you can adjust the models, you can adjust the skills criteria, before it’s deployed and before you have that liability. I think that’s just a mindset shift. But at the end of the day, think of the potential savings in liability from not discriminating. And using AI in a transparent way gives us something we didn’t have before: tools to adjust, in real time, what the skills criteria are.
Fuller: It’s interesting. I did a paper with some colleagues at Deloitte on how boards and senior management teams think about talent and hiring as an issue of risk management. And it was rather surprising the degree to which large companies don’t extend their formal risk management processes into these domains. It’ll be interesting to see whether the advent of AI—with everything you’ve discussed and described to us today—and its unfamiliarity to most management teams causes that to change: whether companies will now recognize that this is a very impactful technology, but one that could lead to all sorts of outcomes that they have to anticipate and plan for through a risk-management lens.
Sonderling: I think the scale of the decision making with AI and machine learning in HR is going to help push that conversation. There’s a statistic that, for somebody in talent acquisition, it takes around seven seconds to look over a resume—we’ll say a paper resume. So if somebody wants to discriminate that way, they have to look. Say they want to exclude all women from the workforce: they have to look for female-sounding names, women’s colleges, whatever indicators, and throw the resume in the trash or delete it. That physically takes time. With AI, you could do that to millions of resumes in 0.7 seconds. So I think, at the corporate level, at the board level, knowing there’s that risk of scale will push the conversation: “Okay, our liability now isn’t somebody who took 30 minutes and threw 30 resumes in the trash; it’s hundreds of thousands of individuals qualified for the job being eliminated. Those are all potential lawsuits, so we need to implement this properly, and we need to commit the resources.” And that’s the bigger issue: the resources. Because with SaaS and other AI software being sold into businesses, you let it go. You turn it on, you install it, and it saves you money. It eliminates employees, it makes decisions faster. But here, when you’re dealing with civil rights, when you’re dealing with employment decisions, that set-it-and-forget-it approach just cannot work. So you’re asking companies that have already spent a lot of money investing in the infrastructure for this technology to add more on top of that: “We need to test it, we need to do audits, we need governance, we need a fair-use policy, we need training, we need to keep working with the vendors and run all these different tests under all these different testing models. That’s a lot more work; it’s a lot more money.” But I argue that, for these products to work properly, that has to happen, and that structure has to be built around them. It’s a mindset shift in how we implement technologies in larger corporations.
Fuller: Keith, when I’m discussing the adoption of AI with executives in big companies, one of the things I do hear is a real concern about the implications of holding much more granular, specific data that they own—data that’s discoverable by a body like the EEOC or by plaintiffs’ attorneys. Very often there’s a tension between the human resources department and the chief technology officers and chief counsels, who are saying, “Wait a second. Wait a second. You’re playing with fire here.” Is that a legitimate concern?
Sonderling: That has been the concern that I’ve faced as well. And I’m trying to change the dynamics on this, because I do believe the point you just mentioned is probably the biggest issue in large-scale implementation of some of these products that can truly help not only diversify the workforce but allow employees to find their best jobs within an organization. So let me break this down. Let’s talk about how the EEOC investigates now. Somebody complains of employment discrimination, and we show up. And how do we figure out whether there was employment discrimination? We go to the HR directors, we go to the business leaders, and we say, “Did you discriminate against this applicant pool based upon gender? Because that’s what the results show.” And what do we get? In depositions, people say, “No, of course not. We would never do anything like that.” And that’s what we have to deal with now. So when you talk about the black box of algorithmic decision making, the EEOC has always been left with the black box of somebody’s brain. Whether they made an employment decision lawfully or unlawfully, we have to figure out—and nobody is ever going to admit they have bias. Nobody is going to admit directly …
Fuller: … may not be aware of it.
Sonderling: Or may not even be aware of it. Okay, so that’s what we’re left with now. Or how are employment decisions made? During an interview, you’re sitting there scribbling notes. And why did you hire this individual? Maybe they’re your friend; maybe you go to the same religious institution. Again, those are unlawful factors being taken into account, and we don’t really know—we have to backtrack based upon somebody’s scribbled notes that they may or may not have taken during an interview. So here’s where I’m trying to change the dynamics of implementation—speaking as a lawyer here, from a defense standpoint as well. If you’re using AI to make employment decisions, or to assist you in making employment decisions, there’s a record of a lot of clicks in there. And there are a lot of ways you can use the AI to show, “Here are the skills we were looking for. Here’s the job, here’s the job description, here’s the applicant pool. Here’s how we used machine learning on neutral characteristics that are related to the job, and here’s the pool that came out of it. And here’s why it was a business necessity to put these factors in and take that skills-based approach.” And you have the record of that. Or, to the examples we talked about earlier about the importance of having policies and procedures in place, having those fair-use policies, you can immediately terminate that bad actor with bias, limiting the liability. So I really think this can help not only with the EEOC’s investigations but also help corporations have a much more transparent, documented way of showing how employment decisions are made, which has been lacking for a very long time. And also, from an employee-fairness perspective, they know the qualifications that go into the job. Instead of, “Well, I just didn’t get it because they didn’t like me,” or, “I didn’t get it because I’m this color,” it’s, “No, this is the very transparent way we made the employment decision.” It makes our job at the government easier in these investigations. It potentially deters some of the large class-action lawyers. Because, look, in a federal government investigation or a large-scale class-action lawsuit, it’s going to be much harder to prove the company was a bad actor or is using the program to discriminate when you have those policies from the top, you have the training, and you have clear ways that you used the AI to make employment decisions based on skills—versus another company that just bought the program, installed it like anything else, and didn’t have all that. Where are we going to spend our time? And I think that’s something corporations need to consider as they invest not only in the product but in the whole governance and infrastructure around it.
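To make “you have the record of that” concrete: each AI-assisted screening step can emit a structured, timestamped record of the job-related criteria, the model configuration, and the outcome, rather than leaving investigators to reconstruct intent from interview notes. Here is a minimal sketch; the field names and values are hypothetical illustrations, not any particular vendor’s schema:

```python
# Minimal sketch of a structured audit record for one AI-assisted screening
# decision. All field names and values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScreeningDecision:
    requisition_id: str        # which job and location the decision applies to
    candidate_id: str          # pseudonymous ID, not a name
    criteria: tuple[str, ...]  # the job-related skills the model scored on
    model_version: str         # which model/configuration produced the score
    score: float
    advanced: bool             # whether the candidate moved to the next stage
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = ScreeningDecision(
    requisition_id="REQ-1042-NYC",
    candidate_id="cand-8f3a",
    criteria=("python", "sql", "3yr_data_experience"),
    model_version="resume-screen-v2.1",
    score=0.82,
    advanced=True,
)
print(record)
```

Records like these are what would let an employer show, decision by decision, that neutral, job-related criteria were applied.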
Fuller: It’s all, I think, very exciting and really does get to some of the positive attributes AI could have, from reducing the matching problems between people’s actual backgrounds and their likelihood of success in a job, to helping companies gain more traction in their diversity efforts. Keith, we focused our conversation on AI, which is endlessly interesting, and it’s going to be very provocative to see how it all unfolds in the corporate world. Are there other major issues that the EEOC is focused on that you think our listeners will be interested to hear about over the course of your term in the administration?
Sonderling: I think the continued interest from Washington, D.C., in pay is going to be top of mind not only for the federal government but for employers nationwide.
Fuller: In pay levels? In how hours are recognized? In contractor versus full time?
Sonderling: I break it down into three buckets. The first is pay transparency. What you’re seeing is states and cities asking how we root out pay inequity—which has been illegal since 1963, before the EEOC existed—by putting in laws that require employers to disclose the pay for certain positions. They’re saying, “Here’s the range, and we’re requiring you, private employers, to publish it to prevent discrimination: here’s the qualification, the low end of the scale, the high end of the scale.” The federal government does not have that requirement, so you’re seeing a lot of states pushed into that. And when states like California, New York, and Colorado do it, national employers are saying, “We’re just going to do it everywhere.” So I think that’s a huge topic when you talk about pay. The second is pay equity. With the U.S. Women’s Soccer team case, which the EEOC was involved in—very much like the #MeToo movement—it pushed the issue to the front page of the newspaper. Because of the interest in Washington, D.C., whether it’s through pay transparency or pay data collection, we’re really going to see an increase in efforts to close the pay gap, not just between men and women but also across national origin and other categories where pay discrimination is illegal. And then the third, regulatory issue is, how do we get there? If the federal government collects private employers’ payroll information, will that help? And I think the conversation you’ll be seeing toward the end of 2023 and into 2024 is, what is the federal government’s role in overseeing payroll records and requiring employers to disclose pay bands broken down by race, ethnicity, and sex—categories we already collect workforce data from employers on? There’s a big push now for the EEOC to collect that data from employers. Knowing that it is likely going to happen one way or the other, I’ve really been talking to CHROs and general counsels and saying, “Now’s your time to get your pay in order.” Look at those pay gaps—whether between men and women, or across race, national origin, disability, religion, LGBT status, all of our protected categories, to the extent, of course, that you have that information. Now is the time to resolve that and see whether it’s really pay inequity or whether there are lawful reasons for a pay discrepancy, like seniority programs, commissions, et cetera. Before you ever have to disclose your payroll information to the federal government, you can get your house in order now. Because I’m telling you, from Washington, D.C., with the dynamics of the White House, the dynamics of the Senate, the dynamics of the EEOC, there’s just a huge interest in tackling pay.
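On “see whether it’s really pay inequity or whether there are lawful reasons”: a common first pass is to compare the raw pay gap between two groups with the gap measured within a lawful factor such as seniority band. A minimal sketch on made-up numbers:

```python
# Minimal sketch of a first-pass internal pay-equity check. The employee
# records are hypothetical; a real analysis would use regression with
# multiple lawful factors (seniority, commissions, geography, etc.).
from statistics import mean

employees = [
    # (group, seniority_band, salary)
    ("A", "junior", 60_000), ("A", "junior", 62_000), ("A", "senior", 95_000),
    ("B", "junior", 61_000), ("B", "senior", 94_000), ("B", "senior", 97_000),
]

def avg_pay(group, band=None):
    """Average salary for a group, optionally restricted to one seniority band."""
    return mean(s for g, b, s in employees
                if g == group and (band is None or b == band))

raw_gap = avg_pay("A") - avg_pay("B")
print(f"Raw average gap (A - B): {raw_gap:,.0f}")

# If within-band gaps are near zero while the raw gap is large, the
# discrepancy may reflect seniority mix, a lawful factor, not inequity.
for band in ("junior", "senior"):
    gap = avg_pay("A", band) - avg_pay("B", band)
    print(f"Gap within {band} band: {gap:,.0f}")
```

Here the raw gap is large but the within-band gaps are near zero, which is the kind of lawful discrepancy Sonderling describes; the reverse pattern would be the signal to investigate before any federal disclosure requirement arrives.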
Fuller: Well, Keith Sonderling, Commissioner at the U.S. federal government’s Equal Employment Opportunity Commission, we’ll be looking forward to developments and watching the role of the EEOC in regulating these technologies as they roll out, and in important issues like pay and discrimination. Thanks for joining us.
Sonderling: Thanks for having me.
Fuller: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.