
Your Computer Might Be Racist

AI is quickly taking over, but the decisions it makes can be based on fundamentally biased learning.

by Kimberly Holmes-Iverson

Your computer is watching you, and if you’re a person of color, researchers say chances are it’s also making up a stereotypical storyline to accompany the view.

Every major technology company uses machine learning models to perform better. From the suggestions your shopping app makes while you search for your next pair of jeans to the movies your streaming service says you should watch next, the platforms we rely on daily are fueled by recommendation algorithms. A programmer “teaches” the machine by feeding it data; every time you confirm you are a human and not a robot while doing business online, you’re helping to train the artificial intelligence. According to experts, however, these tools are making decisions based on fundamentally biased data, which can adversely affect a person’s civil rights and limit the opportunities they are offered.

“Data plays a very big role in artificial intelligence and machine learning,” says Danda Rawat, PhD, Howard University’s associate dean for research in engineering, professor, and director of Howard’s Data Science and Cybersecurity Center.

The groundbreaking 2018 paper, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” detailed the rampant bias displayed by machine learning. Researchers Joy Buolamwini from MIT and Timnit Gebru from Microsoft assembled a data set of more than 1,000 images of faces from three African countries and three European countries and put gender classification systems from IBM, Microsoft, and Face++ to the test. Each system performed worst on darker-skinned women, meaning the programs repeatedly could not “understand” or see the faces of women and men of color, while they could quickly decipher those of white men and women.
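For readers curious how such an audit works mechanically, the sketch below illustrates the core idea under simplified assumptions (the records and group labels are hypothetical placeholders, not the Gender Shades data): run a classifier over a labeled benchmark and compare its accuracy group by group.

```python
# A minimal, illustrative audit: tally a classifier's accuracy for each
# demographic subgroup of a labeled benchmark. The records below are
# hypothetical placeholders, not the Gender Shades data set.
from collections import defaultdict

records = [
    {"group": "darker-skinned female",  "true": "female", "pred": "male"},
    {"group": "darker-skinned male",    "true": "male",   "pred": "male"},
    {"group": "lighter-skinned female", "true": "female", "pred": "female"},
    {"group": "lighter-skinned male",   "true": "male",   "pred": "male"},
    # ...one record per benchmark image, with the classifier's prediction
]

correct = defaultdict(int)
total = defaultdict(int)
for r in records:
    total[r["group"]] += 1
    correct[r["group"]] += int(r["pred"] == r["true"])

# A large gap between the best- and worst-served groups is the red flag.
for group, n in total.items():
    print(f"{group}: {correct[group] / n:.0%} accuracy over {n} images")
```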

Research has shown that artificial intelligence has more difficulty recognizing darker-skinned people.

The failure is born of the lack of diversity in the data used to train the machines, and it can translate into bias affecting who is hired, who is accused, and who receives access to quality medical care. For example, researchers say that if a machine reviews a name or face and assigns a negative connotation to that person or resume, it could set off a chain of consequences for the person, who doesn’t get the job and now can’t afford to feed his or her family.

Artificial Intelligence is Not Intelligent

“When you say AI, artificial intelligence, you’re implying that this construct has intelligence; you’re assigning a human quality to this machine, and these machines are not human,” cautions Dhanaraj Thakur, PhD, an instructor in the Department of Communication, Culture and Media Studies at Howard University. “They can’t empathize. They can’t deal with ethical issues or moral issues, so that implies questions around justice and fairness.”

Thakur is also research director at the Center for Democracy and Technology (CDT). His work has examined automated content moderation, data privacy, and gendered disinformation, among other tech policy issues. Thakur says research has shown that machine learning tools can be highly discriminatory and biased against people of color in particular. In one example, different facial recognition tools were shown to be less accurate at classifying darker-skinned women compared to other groups. In another, when asked to complete sentences about Muslims, a machine learning model returned results that were often violent and linked to terrorism.

AI has often misinterpreted the innocuous behavior of darker-skinned people as dangerous. For example, a Black person using a thermal forehead thermometer on a patient may be interpreted as aiming a handgun instead.

“Let’s say it’s not intentional. Why is it happening?” questions Thakur. “Three reasons: these models are often developed by a like-minded group of programmers, usually white men. When you talk about machine learning developers, they are not diverse in the way the U.S. is diverse. Second is the data sources and the data quality. These datasets include millions and billions of data points. They’re huge, yet they’re often still not representative of society in many ways. And then finally, most of the content on the web – and so most of the data used to train these models – is in English.”


His third point is the impetus for his research team’s current focus: the use of large language models in non-English languages. As the team noted in a recent blog post: “Large language models are models trained on billions of words to try to predict what word will likely come next given a sequence of words (e.g., “After I exercise, I drink ____” → [(“water,” 74%), (“gatorade,” 22%), (“beer,” 0.6%)]).” The team found that when it comes to the use of these models (like ChatGPT), the relative lack of data, software tools, and academic research in non-English languages can lead to serious problems for some populations. For instance, advocates have argued that Facebook’s failure to detect and remove inflammatory posts in Burmese, Amharic, and Assamese has promoted genocide and hatred against persecuted groups in Myanmar, Ethiopia, and India.
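That next-word arithmetic is easy to see in action. The sketch below is a minimal illustration using the small, publicly available GPT-2 model through the Hugging Face transformers library; it is an assumption chosen for demonstration, not the models or tooling the CDT team studied.

```python
# Illustrative only: print a small public model's (GPT-2) probabilities for
# the next word after a prompt. Requires the "transformers" and "torch"
# packages; this is not the CDT team's setup.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "After I exercise, I drink"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every vocabulary token

probs = torch.softmax(logits[0, -1], dim=-1)   # probabilities for the next token
top = torch.topk(probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)).strip()!r}: {prob.item():.1%}")
```

A model can only learn those probabilities from the text it is trained on, which is why languages thinly represented on the web tend to be served far worse.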

Instead of collecting new data that is wholly representative of society, the machine selects the answer that best fits its training set. But what if that collection of data is wrong, or not large enough to accurately represent the world?

Combatting racism in AI

Researchers at Howard are on the frontlines of fighting racism displayed in the artificial intelligence that big tech companies use and share. 

Howard made history earlier this year, announcing it will lead a university-affiliated research center sponsored by the Air Force, becoming the first HBCU to do so. The five-year project includes forming a consortium of historically Black colleges with engineering and technology capabilities.

A UARC is a United States Department of Defense research center associated with a university. The cutting-edge institutions conduct basic, applied, and technology demonstration research. The partnership allows for a rich sharing of expertise and cost-effective resources. Rawat serves as the executive director of the UARC and principal investigator on the Howard contract. 

“If we don’t trust the system, it’s not usable by the people,” says Rawat. “We have to change that.”

Howard University President Wayne A. I. Frederick with U.S. Secretary of Defense Lloyd Austin III and U.S. Secretary of the Air Force Frank Kendall. Howard will lead the 15th University Affiliated Research Center (UARC) and is the first HBCU to lead a UARC.

The $90 million contract will allow Howard’s researchers to work on building trust in human-machine teaming and continue to chip away at the problem of bias in machine learning.

“It would help if we had a more diverse set of people working on these issues,” says Thakur. “That’s a long-term issue because there are many kinds of structural barriers as to why more people of color aren’t in these fields and why they don’t stay in these fields.”

The establishment of the Howard UARC enables the University to create a pipeline for students from elementary to post-graduate education. However, it’s not a simple path forward. 

Last year, Howard was selected to lead the $15 million data science training core of a National Institutes of Health (NIH)-backed consortium. The Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program seeks to close diversity gaps in the artificial intelligence and machine learning fields. Work is centered on deciphering how algorithms are developed and trained and fighting the harmful biases that have emerged, which can lead to continued health disparities and inequities for underrepresented communities.


Howard computer scientist Legand Burge III, PhD, serves as the principal investigator of the data science training aspects of the initiative. His team comprises five faculty members and 24 undergraduate and graduate students. One task involves helping educate students at HBCUs and minority-serving institutions, thereby increasing the number of potential workers of color in the field.

“What we have noticed is that at a lot of the institutions in the communities we’re trying to engage, there’s a heavy lift that needs to happen,” says Burge. “A lot of these places don’t even have a computer science degree, so we’re finding that we’re going to have to build a foundation for many of the institutions to be able to participate. We call this ‘AI readiness level.’ For a lot of institutions, the AI readiness level is not there.”

K Through Gray: training a workforce to grow representation

A pivotal part of the plan includes increasing awareness and interest in STEM education amongst K-12 students.

“I say from ‘K through Gray,’” chuckles Burge. “We’re trying to train up a workforce and increase the representation of underrepresented groups, from those participating in the data to those using AI/ML (artificial intelligence/machine learning) to solve health equity issues. There’s a lot of data of minorities, but it’s not being used in the AI that’s being developed right now.”

The team is dedicating this year to supporting tribal communities by working with 35 universities and helping them build their AI readiness.

While machine learning tools can enhance the work of many, there are also fears the technology itself could erase jobs, especially those held by workers from underrepresented minorities.

“If somebody can do something that is repeated in pattern, I think those jobs could be replaced, but the loss will be compensated somewhere else,” says Rawat. “One analogy is when the internet and email systems were introduced in the 1990s, people thought USPS-type service providers would go out of business, right? But there’s still USPS, UPS, and FedEx. Email did not replace those companies.”

Burge agrees, adding, “There are even new jobs coming out around these large language models like ChatGPT; they call them prompt engineers. They’re paying well, and you don’t have to be a technical person.”

But experts caution there is a need for technical oversight.

“Questions are being raised right now about recommendation algorithms within social media and how that impacts teenage girls and their mental health, but what about Black teenage girls?” Thakur asks. “That kind of question is missing at the policy level. And what is the incentive for a big company to address this question? That’s where you need the government to step in.”

Still, Thakur offers solace rooted in the passion of his researchers. 

“A lot of work has been done to address these questions, mostly by African American academics, researchers and activists in the Black community, and other communities of color,” says Thakur. 


This story appears in the Spring/Summer 2023 issue.