OpenAI’s ChatGPT has been hailed as a revolutionary generative tool with the potential to break down barriers and promote cross-cultural understanding. With approximately 13 million unique visitors using ChatGPT each day in January 2023, the platform and similar technologies have reshaped our communication, healthcare, government, schooling, and business systems. With just a click, generative AI platforms can schedule focus time, draft a budget, plan travel, or generate images from text.
As a professor with dyslexia, I’ve harnessed the power of ChatGPT and other AI technologies to assist in editing course notes, automatically generate text from audio, and craft an extensive test bank filled with questions tailored to my instructional materials. However, despite these advancements, I remain mindful that I am the instructor and AI is just one of many tools helping me communicate.
The reason is that artificial intelligence models and other content-generating tools fail to grasp the historically diverse nuances of human opinion, thought, language, and experience. Many lack the historical knowledge that grounds the foundations of science and technology in the African world: the computational data-collection power of the Ishango bone, the fractal patterns in kente weaving, and the binary logic embedded in the Ifá system. ChatGPT and other content-generating tools engage in the unauthorized collection of data, commonly referred to as “scraping,” from many undisclosed sources to generate content or develop products. AI technologies frequently echo the dominant viewpoints of certain groups, spread misinformation, or simply present inaccurate or fabricated information as fact, especially concerning the African diaspora.
For example, DALL-E 2, a system that can create realistic images and art from a natural-language description, failed to depict Founders Library, kente cloth, or the faces of future Howard University students.
AI tools are integrated into many common social media platforms (Twitter, Instagram, Snapchat), professional office tools (Microsoft 365, Google Docs), and general applications (resume builders, mortgage applications) under the guise of efficiency and productivity. Without guidance, guardrails, and authentic engagement, our communities face what Safiya Noble, PhD, author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” calls technological redlining and algorithmic oppression.
The far-reaching ramifications of AI technologies for marginalized communities are noteworthy. If these technologies were developed to support us, they could be deployed to imagine a world without food insecurity, toxic drinking water, mass incarceration, infant mortality, or elementary school dropouts. Yet there is not enough data on our communities to model such a reality, and the data world has yet to substantially invest in such projects.
A collaborative team of Howard faculty and staff members is formulating preliminary guidelines on the use of generative AI tools. Everyone should develop an expansive set of AI literacies in order to take responsibility for telling our stories, to protect our data, and to promote an ethic of transparency as the world uses these tools.