AI and HU

From researchers to musicians, faculty and students are adapting to a rapidly developing technology.

by Danny Flannery
Adobe stock image of an AI brain

In June, Howard researchers marked a significant milestone in the journey to make Black voices heard. The researchers debuted resources they pioneered to make it easier for Black users to harness the power of artificial intelligence with their own voices. AI-powered automatic speech recognition (ASR) tools, used in everything from transcription to asking Apple’s Siri a question, have transformed how we communicate with each other and our devices. Because these tools rely on datasets that largely exclude African American English (AAE), they are prone to higher error rates for Black users speaking in their natural voices. For many Black users, the tools have been another example of how their lives and communities are excluded from conversations about technological progress.

To help solve this problem, Howard and Google created Project Elevate Black Voices, a partnership that developed a dataset of Black regional dialects. Hosting events in cities across the country, research teams recorded 600 hours of AAE. The dataset, owned by Howard and currently open to researchers at HBCUs, can now be used to improve ASR technologies for Black speakers. The project was led by Gloria Washington, Ph.D., Lucretia Williams, Ph.D., and other researchers from Howard and Google.
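One way to make the problem concrete is to compare word error rates (WER), a standard measure of ASR accuracy, across speaker groups. The short Python sketch below is purely illustrative; it is not code or data from Project Elevate Black Voices, and the sample transcripts and group labels are hypothetical.

```python
# Illustrative sketch (not project code): comparing speech-recognition
# word error rates (WER) across speaker groups. Transcripts are made up.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER via word-level edit distance (substitutions, insertions, deletions)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation set: (speaker group, human transcript, ASR output)
samples = [
    ("group_a", "we finna head to the store", "we fin a head to the store"),
    ("group_b", "we are about to head to the store", "we are about to head to the store"),
]

totals = defaultdict(list)
for group, reference, hypothesis in samples:
    totals[group].append(word_error_rate(reference, hypothesis))

for group, rates in totals.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```

A gap in mean WER between groups on the same kinds of everyday speech is the sort of disparity a more representative dataset is meant to close.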

“You shouldn’t have to code switch to use technology,” said Williams, senior researcher at Howard’s Human-Centered Artificial Intelligence Institute (HCAI), while giving a presentation on her lab’s work.

Lucretia Williams
Dr. Lucretia Williams is a leader of a pioneering effort to ensure that AI recognizes Black voices. Photo courtesy of Lucretia Williams.

This project is only one of many AI research initiatives underway at Howard. Through the work of researchers at every level, from senior faculty to undergraduates, the university is becoming a hub for developing new knowledge about AI’s growing use, from improvements in medical imaging to the preservation of West African culture. Among these projects is the BRAVE IDEAS lab, run by Jaye Nias, Ph.D. (also of the HCAI), which explores how AI can help us learn about and preserve customs and languages. Nias’ recent work focuses on preserving and teaching others about adinkra, symbols of the Asante people of Ghana and Côte d’Ivoire that serve as visual representations of proverbs and aphorisms.

“I started to wonder about proverbs as a wisdom source that could be used to train an AI model,” Nias explained. “I moved into this idea of the symbol, the proverb, and the wisdom that comes from it. We have the user put in a prompt, it finds a proverb, and then it parses a Generative Pre-trained Transformer (GPT) to provide some structured wisdom for that prompt that relates to that proverb.” With its ability to preserve local wisdom and introduce it to people outside of the Asante, this project reflects a path for AI that safekeeps and expands cultural knowledge. For Nias, this means centering humans at every step.
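As a rough illustration of the pipeline Nias describes, and not the BRAVE IDEAS lab’s actual code, the Python sketch below pairs a user’s prompt with a proverb from a small lookup table and then composes a request that could be sent to a GPT-style model. The placeholder proverbs, the keyword matching, and the prompt wording are all assumptions made for illustration.

```python
# Illustrative sketch only (not the BRAVE IDEAS lab's code): pair a user's
# prompt with a proverb, then build a request for a GPT-style model to offer
# "structured wisdom" grounded in that proverb.
# The proverbs, keywords, and matching rule below are hypothetical placeholders.

PROVERBS = {
    "patience": "Placeholder proverb about patience.",
    "unity": "Placeholder proverb about unity.",
}

def find_proverb(user_prompt: str) -> str:
    """Very naive keyword match; a real system might use embeddings or curated metadata."""
    text = user_prompt.lower()
    for keyword, proverb in PROVERBS.items():
        if keyword in text:
            return proverb
    return next(iter(PROVERBS.values()))  # fall back to a default proverb

def build_request(user_prompt: str) -> str:
    """Compose the text that would be sent to a generative model via an LLM API."""
    proverb = find_proverb(user_prompt)
    return (
        "You are a guide drawing on adinkra wisdom.\n"
        f"Proverb: {proverb}\n"
        f"Question: {user_prompt}\n"
        "Offer structured advice that explains the proverb and applies it to the question."
    )

print(build_request("How do I stay patient while learning something difficult?"))
```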

“This idea of rooting my research and my implementation of AI with cultural knowledge sources is really important to me,” she said. “When I first started in graduate school, I was interested in culturally relevant technologies and research within human-centered computing. I don’t like this idea of how technology is simply flattening the world. What we found is in the global South, many would lean into the ways of the global North in order to participate. But there is a lot of knowledge embedded in how other cultures engage and utilize their own indigenous technologies. As a result of this flattening, we are losing really important cultural artifacts, like mother tongue.”

Balancing Privacy and Community

While the work of Nias and Project Elevate Black Voices shows that AI can be used to aid and uplift, the potential to harm users and cultures is difficult to ignore. David F. Green, Ph.D., associate chair of writing in the Literature and Writing Department, worked on the Project Elevate Black Voices team to help researchers identify and categorize aspects of AAE dialects.

Green, who has a background in African American, cultural, hip-hop, and technological rhetoric, is a member of the MLA-CCCC Joint Task Force on Writing and AI. The task force was created to develop resources, guidelines, and professional standards on AI and writing. Through working papers, guidelines, and collections of teachers’ personal experiences, the task force is providing a framework for educators to teach ethical and responsible AI use.

One of Green’s chief concerns with AI is how it can affect communication.

“It flattens language,” he explained. “If you’re only drawing on a limited set of authors, writers, and thinkers, it limits the possibilities and the capabilities of how expression occurs. And so, cultural influences disappear. The unique identity markers begin to disappear.”

David Green
Dr. David F. Green Jr. is tackling the intersection of writing and AI. Photo by Justin D. Knight.

This flattening is a major reason that research such as Project Elevate Black Voices is essential. Users, especially Black users and others with distinct cultural ways of speaking and expressing themselves, may not hear or see themselves reflected in the technology and may self-censor while using it, further ingraining biases.

This cycle can have real-world consequences, as documented in a 2024 study published in Nature titled “AI Generates Covertly Racist Decisions About People Based on Their Dialect.” The study found that current language models “embody covert racism in the form of dialect prejudice, exhibiting racial linguistic stereotypes about speakers of African American English that are more negative than any human stereotypes about African Americans ever experimentally recorded.” As new AI tools are proposed in everything from housing to hiring to criminal sentencing, there is a real risk that they will recreate or even exacerbate discrimination. Nias made a similar point while discussing a presentation by social workers in Pennsylvania on how AI is being used in systems that recommend whether children should be removed from their homes.

“For years, these systems have been a part of our judicial system,” said Nias. “It’s been present; we just didn’t have a grounding in what it meant and what it could do, and I think now people are a little more aware of that power and how it can harm or help.”

Even as researchers find ways for AI to build up communities, the underlying models inescapably require massive amounts of data, such as art, music, films, research papers, and social media posts. Harvesting this data raises major concerns about privacy and ownership.

“There’s a mad dash to gain access to more and more data — and that really means more and more of people’s writings — to begin to use datasets to train the technology,” said Green. “A lot of times people are not aware of the ways that their public or published writings become a part of these datasets because you’ve turned over your rights to that information. Those long and insightful Facebook posts or blog posts published to some of these companies’ websites can become fair game that can be used in ways you may not have intended, with a purpose you may not have intended as well.”

Environmental Demands of Digital Devices

AI’s impact isn’t limited to digital spaces. Companies like Amazon and Google operate legions of data centers, each housing thousands of servers that consume large amounts of electricity, along with water to keep cooling systems running. According to research from the Massachusetts Institute of Technology, the electricity consumption of these centers is expected to approach 1,050 terawatt-hours by 2026, which would make them the fifth-largest source of electricity demand in the world.

“The data servers and the technologies themselves use massive amounts of energy to do the work that they continue to do for folks,” said Green. “The impact on the environment includes ongoing global warming concerns. The massive [amount of] heat and the amount of energy that’s produced and used is, in some ways, similar to a nuclear power plant in terms of just the amount of disruption it does.”

This disruption is not dispersed equally. Across the country, coal, natural gas, and other “dirty energy” plants are disproportionately placed in economically impoverished, rural Black communities. In Colleton County, South Carolina, for example, a 2024 proposal would repurpose a previously shuttered coal plant as a natural gas plant, potentially threatening air quality for the predominantly Black residents living in the area. As reported in Capital B News, this is being done to generate power for the state’s expanding data centers. The power these data centers require may also increase local residents’ electricity bills, disproportionately impacting Black households, where electricity on average takes up a larger percentage of the household budget, as noted by the ACEEE and the Department of Energy.

Like any digital technology, from laptops to the phones in our pockets, new tools reliant on AI depend on rare-earth elements and other critical minerals, materials essential to modern electronics manufacturing that require immense mining and processing of raw ore to become usable. That mining and processing carries significant environmental risks, many of them shouldered by people in the Global South. For recent Howard graduate Becca Haynesworth, the technology is inextricably tied to colonization.

“When people make assumptions that AI just appears out of nowhere on our laptops, we have to really trace the footsteps of this piece of technology,” Haynesworth explained. “When you do, it’s rooted in material like cobalt, which is very abundant in the Democratic Republic of Congo.” Cobalt is a critical mineral used in semiconductors and in the lithium-ion batteries that power everything from phones to laptops to electric vehicles.

There is evidence that the Congolese cobalt industry has led to water pollution, forced displacement, and slavery, as reported by NPR, Amnesty International, and African Resources Watch. These human costs did not begin with AI’s growth, but they place the technology in a long history in which progress for some has meant exploitation, often unseen, for many.

Next-Gen Concerns

There has been no shortage of headlines on generative AI’s impact on education, painting a picture of college students readily replacing their own critical thinking with AI-generated work. While it would be dishonest to say there are no concerns about how technology is affecting young learners, among Howard’s students, the conversation is more nuanced.

“It might be unpopular, but I’m very anti-AI,” said rising junior Olivia Ocran, who is an English major with a minor in education. “At least in the writing world, I don’t see it having a place because of the way it’s designed. It’s taking what it finds on the internet and it’s regurgitating it out. It’s stealing the work and the years that people have put into their writing that they have found the courage to share.”

Ocran has seen the impacts of AI on younger students first-hand during observation hours in middle and high school classrooms. She worries that young students today are losing the skills that made her pursue writing and teaching in the first place, such as forming an argument, conducting research, and going to the library to ask for information.

In her education courses, Ocran heard conversations about how AI could be used as an educational tool. However, she remains unconvinced.

“We have created well-educated, successful people without AI,” she said. “I feel like because students have gotten used to abusing it, keeping it in the classroom is not going to be any help at all.”

Even amongst students who regularly engage with AI tools, there is an apprehension toward just how quickly the technology has changed our everyday lives.

Computer engineering sophomore Kamili Campbell works alongside Dr. Nias at the Human-Centered AI Institute. Campbell, an international student who has been interested in coding since high school, credits the immense diversity of cultures she experienced growing up in Trinidad and Tobago, and then coming to the United States, as the catalyst for her interest in the work of Nias’ BRAVE IDEAS lab. In addition to her computer engineering work, Campbell has embraced generative AI in her organizational role as co-president of the Howard chapter of the American Red Cross, where she uses it to help refine ideas for outreach events. For Campbell, AI is a brainstorming tool that demands a measure of personal responsibility.

Campbell also has long-term concerns about AI’s impact on her prospective career field just as she’s beginning to enter it. Certain sectors, such as computer programming, are facing steep declines in jobs. The role of computer scientists is also shifting, along with the skills essential to the job.

“There are companies [that are] getting mid-level engineering scripts from AI technology like ALO with no human employees, just straight AI,” Campbell said. “It’s great because of the efficiency; they definitely decrease human error. As somebody who’s working hard in school to become a mid-level engineer, what do I do?”

Out of the Lab and Into the Classroom (and the Recording Studio, and the Office, and the ...)

For Howard professors and students alike, AI has quickly become a part of daily life, and how best to approach its entrance into the classroom is far from settled.

At the heart of the debate is how to balance concerns about what AI means for learning with the reality that it is becoming an increasingly essential tool for students as they begin their professional lives. Some of the most exciting discussions and debates about AI’s role are happening in Howard’s Music Department.

Department Director Caroll Dashiell has been a longstanding fixture in the music industry and at Howard. A generational alum and the father of three Howard grads, including fellow music professor Christie Dashiell, Dashiell has built a career that includes playing jazz bass in multiple orchestras and with international performers, as well as recording, producing, and directing. He also has three decades of academic service at East Carolina University. He sees AI technology affecting all aspects of music and feels a duty to ensure his students are prepared for the future.

“It’s so important,” Dashiell said. “I always say to people who are against it — because we have people who are actually against it — you’ve been using it all the time. Have you ever used your GPS or asked Siri a question?”

As someone who must balance the creative and business aspects of his work, Dashiell has found uses for AI that enhance what he is able to do creatively, rather than stripping the human element away.

“As an artist, I don’t want AI to generate the bass,” he explained. “But as a producer, it always comes down to economics. I look at the orchestra and the strings when we’re doing shows. It used to be that it would be a full string section, but now from a financial standpoint, I can’t afford to pay for a full string section. I can pay for two string players, four at the most, then I’m using AI to double parts. In the performance hall when people are listening, they hear a full orchestra. It’s four people playing, but it’s enhanced and overdubbed and stacked.”

Dashiell understands the hesitancies of other faculty members and agrees there needs to be a greater understanding of where AI can be helpful in education and what its limitations are. To that end, he formed an exploratory committee with other music faculty members, whose attitudes toward AI broadly differ.

Angela Powell Walker, a member of the Music Department’s AI working group alongside professors Matthew Franke and Autumn McDonald, began using AI in her own life at the urging of her husband, a graphic designer who uses the technology on a daily basis. Describing herself as an AI newbie, Powell Walker now uses it for everything from conducting research to crafting quizzes.

“I love using it as a research tool because I can type in, ‘Can you name five composers from the impressionistic period that wrote songs about flowers?’” she said. “It will get that information out to you in like five seconds. And then you can go do the research accordingly on those composers.”

Powell Walker shares Dashiell’s belief that students should adapt to using the tool responsibly, rather than avoiding it altogether.

“Knowledge is power,” she added. “Students are going to find what makes life the easiest for them. Not just the students, everybody’s going to do that. If the tool’s there, I think it’s important that we don’t necessarily resist it but that we learn about it and how we can make it a successful, useful thing for the kids.”

Musicology professor Matthew Franke, Ph.D., is far more skeptical of its use, having found generative AI tools to be too unreliable.

“I try to avoid it, honestly, because I’ve just seen too many errors from students who are using it,” Franke said. “I routinely have to fail students because, say, I’ll ask ‘give me a bibliography on this topic,’ and they’ll [use] ChatGPT. Sixty percent of the bibliography are books and articles that don’t exist.”

He cautions his students against relying on any generative AI outputs and said he’s concerned about the inherently unequal power dynamics between users and the tech companies that run these tools. Franke describes it as a catch-22, especially in relation to authenticity and plagiarism concerns.

“I think students are being told if they don’t jump on, they will be left behind, but if they do jump on board then they have become fake,” he said. “They lose authenticity, and this can also be used against them, so you might as well just be yourself, which is the last thing you can be if you’re using AI.”

This authenticity is especially important at a time when researchers are under intense, often politically motivated scrutiny. “I tell my students if you use AI, this is another thing that people who are hostile can point to and [say] ‘go look at this,’” he explained, concerned about how overuse or misuse of AI could come back to hurt students further along in their academic careers, comparing it to accusations of plagiarism.

On the other end of the spectrum, music and business professor Autumn McDonald has fully incorporated AI into her professional life, using it in everything from anthropological research to crafting presentations for her market research company, ADM Insights & Strategy. Her courses, such as marketing for the arts, prepare students for the realities of approaching the arts as a business. To her, it is unfair to ignore AI in the classroom when it is already shaping the professional world.

Autumn McDonald
Autumn McDonald incorporates AI as she teaches students the business of music and the arts. Photo courtesy of Autumn McDonald.

“If I teach an entire semester and don’t have the students engage with AI during my course, then I am doing a disservice to those students,” said McDonald. “I’m not sufficiently preparing them for the world that awaits them, so I am very intentional and thoughtful about having certain assignments in which they are required to use AI.”

As someone teaching the next generation of Black entrepreneurs, McDonald is more concerned with the consequences of not utilizing AI and furthering the technological gap faced by Black communities.

“The fear of being left behind is a valid fear if we look at ways that technology has entered the landscape over time,” she explained. “We know, for example, that there has been a gap between Afro-descendant households and non-Afro-descendant households and their access to laptops in the home. During the pandemic, Black households did not necessarily have access to high-speed Wi-Fi or personal laptops in the same way that white households did. That creates a learning gap, and a gap in access to information. Many of us want to be sure that there is not a similar gap that comes to bear as it pertains to this current landscape of artificial intelligence.”

Being the Bridge

The discussions in the Music Department, and across campus, are emblematic of Howard’s unique position as a research institution. As the only historically Black university with a Research One (R1) classification, Howard serves not only as a leader in science and technology, but also as a bastion of culture and advocacy for people who have historically been left out of the conversation around technological progress, even as they bear the brunt of its unintended effects.

For Dr. Williams, the answer lies in embracing the community throughout the development process. For Project Elevate Black Voices, that meant compensating participants, hosting group discussions to hear concerns about the technology from the outset, and updating participants via newsletter once the dataset was complete.

“The goal is to be a bridge,” said Williams. “At Howard, we are in the community; that’s why we had community activations as a part of the recruiting mechanism. We wanted to make sure we had that touch point because we didn’t want to just be passive, to have people say, ‘you’re just taking our data.’”

Instead, empowerment and ownership are required to ensure the massive potential benefits and risks of AI are shared equitably. For students and faculty alike, this means taking a “knowledge is power” approach. For researchers, it means taking an active role in analyzing how technology affects everyone.

“You naturally have to be in the community talking to people and building relationships. Trust is a big part of that,” said Williams. “I think that once researchers and technologists get away from the desk and [get] outside talking to organizations and communities, you'll find it becomes easier to do.”

This story appears in the Howard Magazine, Summer/Fall 2025 issue.