
AI Is Now

Artificial Intelligence is well on its way to becoming the defining technology for decades to come. It's changing the way we live, work, and communicate — but where exactly is it taking us?

by Ingrid Sturgis

Ingrid Sturgis is department chair and an associate professor specializing in new media in the Department of Media, Journalism and Film in the Cathy Hughes School of Communications.

By all accounts, artificial intelligence is poised to become the defining technology over the next few decades, reshaping how we live, work, and communicate. Yet despite the bold predictions, breathless coverage, and massive investments, no one truly knows where this technology will take us.

In the three years since ChatGPT and other large language models (LLMs) entered the mainstream, public sentiment has swung between exhilaration, dread, and skepticism, all amplified by media hype. Some critics call this “AI washing”: exaggerated claims about AI’s usefulness or value.

Generative AI tools such as ChatGPT, Claude, Bing Chat, Pi, Google Gemini, and Perplexity rely on enormous datasets culled from books, websites, and articles to generate text, video, and audio. Image-focused tools like DALL-E and Midjourney extend this capability into visual creation. Together, these technologies have sparked a global race to harness, regulate, or at least mitigate their impact on nearly every aspect of modern life — from education and media to employment, privacy, and creativity. The question is not whether AI will shape the future — it already is. The deeper questions are whether the promises made by its loudest champions will hold up under scrutiny, or whether AI will follow the path of other overhyped technologies that eventually fizzled.

So-so Technology?

Tech leaders, economists, and scholars present dramatically different visions for AI. Microsoft co-founder and philanthropist Bill Gates has declared that the rise of AI is “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.” In his words, AI’s advances will render humans obsolete “for most things.” Gates also said the advances will have no limits and that “intelligence will be free,” a frightening prospect for workers who may have to contend with the loss of entire career fields and the need to reskill in light of these advances.

Not everyone shares Gates’s enthusiasm. Sociologist Tressie McMillan Cottom calls AI “a mid technology,” arguing that it’s more about hype than genuine transformation. Mid technology tools, she says, rarely live up to their disruptive billing. They are often deployed in ways that benefit corporations — through layoffs and cost cutting — more than they serve the public good.

Economist Daron Acemoglu echoes this skepticism, warning that many current AI applications are “so-so technologies,” used to justify staff reductions without delivering significant productivity gains or improved services. Meanwhile, consulting firms like McKinsey publish upbeat analyses about AI’s potential to unlock trillions of dollars in economic growth, even as industries such as media and higher education grapple with disruption, layoffs, and existential uncertainty. The result is a public discourse marked by extremes: utopian visions on one hand, apocalyptic warnings on the other.

History shows why some skepticism may be warranted. In 1999, the Y2K bug was feared as a digital apocalypse that would crash computer systems globally because many programs stored years as two digits and could not handle dates beyond 1999. Yet Y2K passed with minimal disruption, thanks to the efforts of programmers. In the 2010s, MOOCs — massive open online courses — promised to shake the very foundation of higher education by making it possible for one professor to teach millions of students at a time. The economies of scale they touted ultimately proved infeasible, as high dropout rates undermined their promise of equity and cost savings. More recently, Facebook bet $40 billion on immersive virtual worlds in the Metaverse, even changing its name to Meta, only to abandon the pivot as public interest waned and redirect its focus to AI.

But unlike MOOCs or the Metaverse, AI is already deeply embedded in our lives. Since AI-enabled virtual assistants such as Apple’s Siri and Amazon’s Alexa launched in 2011 and 2014, respectively, AI has become a constant companion — visible in some cases, invisible in others. Today, Siri, Alexa, and Google Assistant help users streamline a variety of everyday tasks. Search engines filter and rank the information we see. Social media platforms like X (formerly Twitter), Bluesky, TikTok, and LinkedIn rely on algorithms to keep users scrolling and engrossed. In transportation, AI powers Tesla’s autonomous features as well as Google Maps, Apple Maps, and Waze. Health care systems use the technology for diagnostics, predictive analytics, and disease tracking. Businesses use AI to forecast demand, optimize marketing, and monitor operations. AI’s ubiquity underscores why debates about regulation, equity, and ethics are more urgent than ever.

Nowhere are the benefits and dangers of AI more evident than in the classroom. A recent Massachusetts Institute of Technology study found that students using ChatGPT to draft SAT essays became less curious about their topics, retained less of what they wrote, and relied on formulaic, recycled ideas. Researchers called this “cognitive atrophy” — the erosion of curiosity and critical thinking skills. The Digital Education Council warns this could produce shallow learning, with students falling prey to misinformation and overreliance on machine-generated content.

Many educators see AI as a potential ally, capable of making learning more personalized, improving outcomes, and preparing students for a tech-driven future. These benefits, however, will only materialize if schools confront major obstacles: equitable access, data privacy, and ethical safeguards. Without those protections, independent news organization Truthout warns that we risk replacing “the emotional and intellectual process of teaching and learning with a mechanical process of content delivery, data extraction, and surveillance masquerading as education.”

AI also threatens journalism’s fragile economic ecosystem. Google’s AI-powered summaries, which provide instant answers to searchers, have already siphoned traffic from original news sites, triggering layoffs at outlets like Business Insider. This shift deepens the industry’s reliance on tech giants and risks undermining the financial models that sustain quality reporting, as news sites negotiate terms of content usage individually instead of as a collective. Meanwhile, companies like OpenAI, Meta, and Google face lawsuits for scraping copyrighted material — books, articles, and personal data — to train their models. In filmmaking, studio owner and filmmaker Tyler Perry said he was putting his multimillion-dollar studio expansion on hold after a demonstration of OpenAI’s Sora video generator. In anticipation of the technology’s impact, jobs for sound engineers, voice actors, visual effects artists, and post-production professionals are already being affected.

Beyond media and education, AI raises broader societal concerns. If generative AI is the present challenge, artificial general intelligence (AGI) represents the near-future fear. AGI, which doesn’t yet exist but which some forecast could arrive as soon as 2026, would mean machines capable of human-level intelligence across multiple domains. Its theoretical arrival stirs everything from utopian dreams of superintelligence solving humanity’s problems to dystopian warnings about human obsolescence. In this, the fourth industrial revolution, AI threatens to automate millions of high-level white-collar and technology jobs as well as service and clerical jobs, including gateway jobs for those without a college degree, which would widen the digital divide.

Algorithmic bias disproportionately harms Black communities, prompting policymakers and advocates like Mutale Nkonde, president of AI For the People, to push for stronger oversight. She also seeks to develop racial literacy in tech to assess the role that technology products and corporate practices may play in perpetuating structural racism.

Despite the hype, mainstream AI adoption still faces significant roadblocks, including expensive initial investments, potential cybersecurity risks, uncertain return on investment, and environmental concerns such as large-scale AI installations consuming enormous amounts of energy and depleting natural resources.

With all the hyperbole around this new technology, how can we properly sort the hype from the hope of AI? In preparing to teach a course called Reporting on Innovation and Technology, I suggest we apply the Baldwin Test, developed by Emily Tucker and her colleagues at the Center on Privacy and Technology at Georgetown Law. Based on James Baldwin’s essay “Why I Stopped Hating Shakespeare,” it offers a human-centered counternarrative to the Turing Test, which evaluates whether machines can think like humans. The Baldwin Test provides guidelines for evaluating the usefulness and the impact of AI:

  • Be specific about how it works.
  • Identify where corporate secrecy blocks understanding.
  • Name the corporations behind it.
  • And most importantly, attribute decisions to people, not the technology itself.

Combined with Neil Postman’s technoskeptic framework, which asks who benefits, who is harmed, what is lost, and what unintended changes arise, this approach helps cut through hype and ensures students can engage with AI critically.

As computer scientist Timnit Gebru reminds us, “AI is not just about building technology; it’s about building technology that interacts with people and societies in a positive way.” The challenge for educators, the media, and policymakers is not just to race toward AI adoption, but to shape it in ways that serve the broader public good rather than the narrow interests of the powerful. Whether AI becomes a tool for empowerment or exploitation depends on the choices we make now.

This story appears in the Howard Magazine, Summer/Fall 2025 issue.
