18 Feb 2022

Behind the Research: Bias in AI with Himabindu Lakkaraju, Edward McFowland III, and Seth Neel


by Shona Simkin

Himabindu "Hima" Lakkaraju, Edward McFowland III, and Seth Neel are new assistant professors in Technology and Operations Management at Harvard Business School. All three work in artificial intelligence and machine learning, exploring how these tools can help improve high-stakes decision making and examining bias and fairness. We caught up with them to ask about thorny issues in data collection and modeling, why this work is important, and how it's essential to the field of business.

What does each of you focus on, and how does your work intersect?
McFowland: The points of intersection for all of us are fairness and bias, and decision making in very challenging and complex situations. I often look at these issues through the lens of anomalous pattern detection. An anomalous pattern is a systematic pattern in the data that is unexpected or undesirable. We want to see what these anomalies are, where they are, and why they exist, and then characterize them mathematically. I look at anomaly detection as a way to surface things we either don't expect or don't want to observe.
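As a rough illustration of the idea (a minimal sketch, not Professor McFowland's actual methods), the Python snippet below flags subgroups whose outcome rate deviates sharply from the overall baseline; the counts, group labels, and threshold are all made up.

```python
# A minimal sketch of anomalous pattern detection on synthetic data:
# flag subgroups whose outcome rate deviates far from the overall baseline.
# All counts and the z-score threshold are illustrative.
import numpy as np

# Hypothetical data: for each of 5 subgroups, number of cases and positive outcomes.
n_cases = np.array([500, 480, 510, 495, 505])
n_positive = np.array([52, 49, 55, 120, 50])    # subgroup 3 looks unusually high

baseline = n_positive.sum() / n_cases.sum()     # overall positive rate
expected = baseline * n_cases                   # expected positives under the baseline
std_err = np.sqrt(n_cases * baseline * (1 - baseline))

z = (n_positive - expected) / std_err           # standardized deviation per subgroup
for g, score in enumerate(z):
    flag = "ANOMALOUS" if abs(score) > 3 else "as expected"
    print(f"subgroup {g}: rate={n_positive[g] / n_cases[g]:.3f}  z={score:+.2f}  ({flag})")
```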

Lakkaraju: The overarching theme that touches all of our research is how analytics, data, and machine learning can help people make better decisions, in a more transparent, efficient, and fair manner.

I focus quite a bit on thinking about how models can be made more understandable to humans. For example, if we look at a doctor using machine learning models to determine what disease a patient has and what treatment to recommend, or a bank using models to determine who should and should not get a loan, these decisions rely heavily on models and data. Can a loan officer understand what factors the model is using to determine if someone gets a loan or not? As we put people and models together in real-world applications more and more, how can we make these models more understandable to people so that they can determine if, when, and how much to trust these models and their predictions?

Neel: I study privacy-preserving machine learning using tools from a sub-field of computer science called differential privacy, and I’ve also worked extensively on fairness, which is one of the most debated terms. I think we all agree that any notion of algorithmic fairness has to be highly tailored to the context it’s being applied in. That involves engaging stakeholders and domain experts in the actual decision being made rather than leaving it up to the algorithm designer.

What are some examples of fairness and bias in AI and machine learning?
Lakkaraju: Let's look at the context of bail, as that is one setting where human bias can be introduced into the data. There is a lot of debate about the role of machine learning and data in the criminal justice system; to be clear, we don't advocate for or against it, but we do audit and examine it. With bail, a judge decides who does and does not get bail. For those who were given bail, you can also see the outcomes: did they go out and commit another crime, or did they appear in court? That information is in the police records, but we can't observe the outcomes for those who were never released. That means the judge controls what information can ultimately be seen in the data, which is an example of how human bias can creep into the data.
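The "selective labels" issue Lakkaraju describes can be sketched in a few lines. The snippet below uses entirely synthetic data and a made-up release rule; it only illustrates that outcomes are observed for released defendants alone, so training data inherits the judge's decisions.

```python
# Synthetic illustration of the selective-labels problem described above:
# outcomes are only observed for defendants the judge chose to release,
# so any model trained on observed outcomes inherits the judge's decisions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

df = pd.DataFrame({"risk_score": rng.uniform(0, 1, n)})        # hypothetical true risk
# A made-up release rule: the judge mostly releases low-risk defendants, with some noise.
df["released"] = (df["risk_score"] + rng.normal(0, 0.15, n)) < 0.5
# The outcome (re-offense) is only observed for released defendants.
df["reoffended"] = np.where(df["released"], rng.uniform(0, 1, n) < df["risk_score"], np.nan)

observed = df[df["released"]]                                   # what a model would see
print(f"Full population:        {len(df)} defendants")
print(f"Outcomes observed for:  {len(observed)} ({len(observed) / len(df):.0%})")
print(f"Mean risk, observed:    {observed['risk_score'].mean():.2f}")
print(f"Mean risk, unobserved:  {df.loc[~df['released'], 'risk_score'].mean():.2f}")
```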

How is this bias dangerous?
McFowland: These cycles become very, very important. We're not classifying pictures of cats or dogs anymore; we're literally deciding people's fates and lives. If bias enters anywhere, it can manifest and propagate through the system and turn even unbiased, fair processes into unfair and very biased ones. The ability to interrogate, understand, and explain these models is very valuable, but it can be a very challenging and daunting task because those who build the models and those who use them are often different people with different incentives and objectives.

How can that be guarded against?
McFowland: People often think that if they exclude certain features from the model, such as race or gender, then they will never capture spurious information. But we understand from multiple deep studies that many things are correlated with race and gender, so the model may combine where you went to school (and there are schools that predominantly serve women or people of color) with another piece of information and make a poor decision. We know that models will optimize whatever you tell them to optimize, and if you hide certain features because you think they lead to bias, the model can actually recreate them instead.

Lakkaraju: Fairness is not as simple as throwing a problematic feature away. It is a lot more nuanced than eliminating fields from the data and assuming your model is no longer biased. That is not the case, because there are other correlates that can recreate those effects.
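A minimal, synthetic-data sketch of this point: even after the sensitive attribute is dropped, a simple model can often recover it from correlated "neutral-looking" features. The feature names and correlations below are invented for illustration.

```python
# Synthetic illustration: drop the sensitive attribute, then show that the
# remaining "neutral-looking" features can still predict it. Feature names
# and correlations are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5_000

sensitive = rng.integers(0, 2, n)                 # e.g., a protected attribute
school = sensitive + rng.normal(0, 0.6, n)        # e.g., which school you attended
zipcode = sensitive + rng.normal(0, 0.8, n)       # e.g., neighborhood
X = np.column_stack([school, zipcode])            # note: the sensitive column is NOT included

X_tr, X_te, s_tr, s_te = train_test_split(X, sensitive, random_state=0)
clf = LogisticRegression().fit(X_tr, s_tr)
print(f"Accuracy recovering the dropped attribute: {clf.score(X_te, s_te):.2f}")
```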

Neel: Conversely, not collecting those sensitive variables may make it more difficult to correct for bias. It's really counterintuitive: a lot of these notions of fairness rely crucially on the algorithm having access to exactly the sensitive attributes, like race or age or sex, that we may not want to influence the model.
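To see why the sensitive attribute is needed, consider auditing a model's error rates by group: the check below (synthetic data, hypothetical model) is only possible because the group labels were collected in the first place.

```python
# Synthetic illustration: auditing false positive rates by group requires the
# group labels themselves. The "model" here is a made-up predictor that is
# slightly harsher on group 1.
import numpy as np

rng = np.random.default_rng(3)
n = 2_000
group = rng.integers(0, 2, n)                     # sensitive attribute (0 or 1)
y_true = rng.integers(0, 2, n)                    # true outcome
y_pred = np.clip(y_true + (rng.uniform(0, 1, n) < 0.1 + 0.1 * group), 0, 1)

for g in (0, 1):
    negatives = (group == g) & (y_true == 0)      # people who should get a "no"
    fpr = y_pred[negatives].mean()                # how often they wrongly get a "yes"
    print(f"group {g}: false positive rate = {fpr:.2f}")
```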

Are there privacy concerns with collecting this sensitive data?
Neel: When you're running all of these analyses on data and sharing outputs like recommendations or summary statistics, standard techniques can leak information about specific people in the dataset. One example is someone who volunteered to let their genetic material be used for a certain study because they have a rare condition. They expect that their participation in the study (which reveals their disease status) will remain anonymous and secret, but it's been shown that even with simple methods one can reverse-engineer information in these big aggregate datasets and reconstruct information about a specific individual. Just as Edward was saying that one can still have biased data after removing sensitive information, the same is true of privacy. Removing obviously identifiable features doesn't guarantee privacy, because almost any feature can be identifiable if it's combined with the right side information.

The field of differential privacy builds algorithms that balance protecting the privacy of users in the dataset with standard notions of utility like accuracy. One direction that I'm interested in working on with Edward and Hima is studying the interactions between the different notions we all study, for example between the interpretability of a model and its inherent privacy risks.
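As a concrete taste of differential privacy (a textbook sketch, not the professors' research code), the Laplace mechanism below releases a count with calibrated noise so that any single person's presence or absence has only a limited effect on the output; the epsilon value and data are illustrative.

```python
# Textbook sketch of the Laplace mechanism from differential privacy: release
# a count with noise calibrated to its sensitivity, so one person's presence
# or absence has limited effect on the output. Epsilon and data are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def private_count(data: np.ndarray, epsilon: float) -> float:
    """Release the number of 1s in `data` with epsilon-differential privacy."""
    sensitivity = 1.0                  # adding/removing one person changes the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(data.sum()) + noise

has_condition = rng.integers(0, 2, 1_000)          # hypothetical sensitive bit per person
print(f"True count:    {has_condition.sum()}")
print(f"Private count: {private_count(has_condition, epsilon=0.5):.1f}")
```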

What does your work look like on a day to day basis?
Neel: All of these projects start with a potentially novel idea or perspective. The most exciting and intense part of the work is fleshing out the development of a new method, which can happen at the whiteboard with collaborators, or through hours spent mulling different approaches and prototyping simple use cases in code. Once concrete progress has been made, it enters the stage of acquiring the data, coding up the algorithm, and running experiments to see how it compares to the state of the art.

Lakkaraju: That pretty much sums it up. The big picture is very interesting and important, but much of the day-to-day work involves digging into details: we talk with collaborators, figure out what it means to operationalize an idea, and then go do it. Our day-to-day work involves a lot of thinking on our own, writing things up, talking to students, coding, and generally a lot of meetings.

McFowland: I spend a lot of time daydreaming: a lot of deep thinking about a question and why I think it's important, and then trying to decompose that question into pieces. My problems often start with what I see and observe in my day, and then I ask whether it's a specific or a general problem, and whether anomaly detection (or some other tool in my toolbox) can say something different about it. That's when I go from note-taking to talking with other people and whiteboarding to think through different mathematical representations of it. I'm very big on chewing ideas over with other people; I think a lot of us are deeply collaborative, and we try to really help each other think through problems critically.

Why is this work important for business schools?
McFowland: We all sit in machine learning with our expertise but are deeply collaborative in subjects outside of machine learning. I'm a big believer that this data shouldn't be treated as some vector in some Euclidean space; these are actual lives, and if I'm doing it right, what I'm doing could actually impact someone's life in meaningful and consequential ways. How do I take a really important social problem, frame it as something I know how to work with mathematically, and not forget to translate the mathematical solution back to the original context (with all its complexities, nuances, and constraints)?

We know that machine learning gives companies the ability to scale at massive rates and allows them to be pioneers in certain areas. We see the impact: how it's evolving in business and how it's going to change how organizations frame themselves, how they operate and grow. We have the ability collectively to help frame and form how that will look inside organizations over the next decade. That's really important.

Lakkaraju: One of our common themes is using machine learning and analytics to improve high-stakes decision making. A lot of the problems we focus on are decisions that could potentially cause a huge loss to an organization if not done well: it could be financial loss, it could impact someone's health, it could cause someone to go to jail, or affect employment opportunities. These are all high-stakes scenarios that we don't want to get wrong. There are some mistakes that we can all live with, like seeing an irrelevant friend suggestion on Facebook; we may not be happy to see it, but we can move on. What we think about are issues like not being admitted to a college, not getting diagnosed properly, or decisions that cause billions of dollars to be lost.

Neel: It is the high-stakes nature of these decisions that makes it so critical for future leaders in business and society to understand all of these underlying techniques. Ultimately, they’re going to be the ones taking responsibility for their use and deciding the scope and scale of their deployment. The mission statement for our scientific research is to make fundamental contributions to these techniques that make them more accessible, usable, and accurate, but as HBS faculty we also have the opportunity to educate future managers and leaders on how to think about these different areas and become pioneers inside of companies.

What do you like to do in your spare time?
Neel: I like to play squash and have a pretty serious Netflix addiction. Sometimes I can trick Edward into playing chess with me in my office.

McFowland: (laughing) No he can’t because he’s a phenomenal chess player! I thought I was good until I played with Seth! I like demanding sports–you can often find me at Shad playing basketball or lifting, and I’ve picked up tennis in the past few years. I also like learning foreign languages. I’ve been learning Italian, I spoke Spanish for a while, and I like to travel and use my languages.

Lakkaraju: To be very honest, I barely get any free time these days but when I do, I spend it on what I call “maintenance” hobbies such as working out or watching TV. In my past life, I used to pursue calligraphy and improv/standup as hobbies, but not anymore!

