BiGS Actionable Intelligence: Companies are embracing artificial intelligence systems that can perpetuate racial and cultural bias, which can increase business risk and decrease sales. The solution, according to new research out of Harvard Business School, is to ensure that the system developers, data workers and others who create these systems reflect the real-world diversity in the marketplace.

BOSTON — Imagine a parent shopping online for a quinceañera dress for their daughter to wear as she celebrates her 15th birthday and the symbolic passage into adulthood. Would a search site powered by artificial intelligence (AI) list the right product? Would it know what a quinceañera dress is?

Marketing professor Dr. Broderick Turner argues in a new research paper that the search site is more likely to call the garment a prom dress or say that it doesn’t exist, for one simple reason: the people who programmed that AI are statistically unlikely to be Latino. Hispanics and Latinos make up only about 11 percent of AI specialists, while 67 percent are white, according to the job site Zippia.

The scenario highlights a new dimension in the conversation about corporate diversity as companies embrace AI and similar technologies to streamline processes and services. Unless those digital systems reflect real-world diversity, Turner told The BiGS Fix, they risk perpetuating bias based on race, ethnicity, culture and other factors—and that can be costly for businesses.

Currently a fellow at Harvard Business School’s Institute for the Study of Business in Global Society (BiGS), Turner studies the intersection of bias and technology. Guarding against bias in technology, he said, requires that tech companies make sure that the workforce responsible for creating these systems—from the developers who write code to the data workers who classify information—is diverse.

“I'm thinking about categorization and classification of data by the people who will say that a quinceañera dress is a quinceañera dress,” said Turner, who is also the co-founder of the Technology, Race and Prejudice Lab (TRAP Lab) and an assistant professor of marketing at Virginia Tech. “The question is always, who are the humans who are developing the software? Who are the humans who are classifying the data? And ultimately, who are the humans who are going to use the AI systems?”

‘Reputational damage and backlash’

Artificial intelligence is becoming more common in everyday life as machine learning and generative pre-trained transformer systems such as OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Bing and Google’s Bard gain wider acceptance. Much has been written about how bias in AI technology can negatively impact marginalized groups in areas such as employment, housing, credit and criminal justice. But the impact in a corporate setting is not often discussed.

Dr. Karim Ginena, a former senior AI user researcher at Meta who founded the AI governance consulting firm RAI Audit, said companies must be aware that implicit bias in an AI system can perpetuate harmful stereotypes and reinforce systemic discrimination. Perhaps more important, the system can do so with great efficiency.

“As a human being, if you were to discriminate, your points of contact are limited compared with an AI machine that is able to make decisions in fractions of a second,” said Ginena, who focuses on issues of ethics and governance in industry and academia. “That is a major problem. Businesses care about their reputation. So, if a business deploys an AI product with bad implicit bias, they can face reputational damage and backlash.”

Also important is that bias can cause companies to miss opportunities. For example, most AI systems default to English, even though nearly half a billion people worldwide speak Spanish. The situation is more complicated in the United States, where in many households the two languages are intertwined into what is commonly known as “Spanglish.” An AI system that does not accommodate immigrant populations and properly interpret how they speak may fail to capture their business.

Turner said, “It sounds crazy to me … to leave money on the table.”

Critical questions for executives

At the TRAP Lab, Turner and other researchers examine the race-based implicit bias found in AI and work to demystify the intricacies of the algorithms found in AI systems. The lab is developing frameworks to guide adoption of more equitable technology.

“Don't let these companies lie to you about how complicated these systems are,” Turner said. “They can be broken down to their most basic component parts, which is something you learned in high school math.”

When companies develop AI platforms or incorporate them into other systems, Turner encourages CEOs and other executives to ask three questions:

  • What does the product do?
  • Who does it empower?
  • How will it be used daily?

The questions can help them make decisions that will better ensure that AI systems don’t disempower certain populations or simply result in poor products that some demographics don’t trust or can’t use, he said.

“When I talk to business leaders about this, I try to speak in their language, which is to make money,” Turner said. “If you make a bad product that does not work for a huge swath of people, you will lose money. You may sell it the first time, but once it starts to fail a huge swath of humans, you're going to lose money.”

Focus on the Hispanic population

A large part of the solution is to diversify the developers and data workers who create AI systems and classify information so that they properly reflect the marketplace. However, Ginena said that goal may itself be a challenge because of demographics. In 2022, the Latino population in the United States reached almost 64 million, or almost one in every five people, according to a Pew Research Center report. That population is growing fast, having jumped by 26 percent since 2010, and it “is not monolithic,” he said.

While roughly 60 percent of Latinos in the United States are of Mexican origin, according to Pew, millions also hail from Central and South America as well as the Caribbean, each of which has a distinct history and culture. Roughly 27 million U.S. Latinos also identify with more than one race. Even geography plays a role, with a large portion of U.S. Latinos concentrated in Western states like California and Texas, where Latinos make up 40 percent of the population.

At the same time, the number of Hispanic-owned businesses stands at about 5 million, contributing about $800 billion to the American economy each year, according to the U.S. Small Business Administration. Dr. Tessa Garcia-Collart, an assistant professor of marketing at the University of Missouri–St. Louis, said that means there is room to grow.

“Hispanics and Latinos are still underrepresented in all areas of business and higher education,” she said. “On the one hand, this presents an opportunity for businesses to increase the relevance of their product offering to this target market. And on the other hand, this presents a need to increase the participation of Hispanic and Latino professionals in artificial intelligence, data science and machine learning to ensure that these innovations are multicultural. Increasing the multiculturalism of AI in business is an absolute necessity for both business owners and consumers alike.”