How is a self-driving Tesla supposed to act when someone vandalizes a roadway speed limit sign? This is one of several questions a team of Howard University computer science researchers is trying to answer as society hurtles into its artificial intelligence future.
Last year, the U.S. Department of Defense (DOD) awarded Howard University $7.5 million to launch the Howard University Center of Excellence in Artificial Intelligence and Machine Learning (CoE-AIML). The project is led by Danda B. Rawat, Ph.D., Howard University professor of computer science in the College of Engineering and Architecture.
While artificial intelligence (AI) and machine learning (ML) have become mainstream buzzwords, it falls to Rawat and other scientists to work out the real-world kinks in these systems. Last year, hackers used a two-inch piece of tape on a roadway sign to trick the Tesla autopilot into speeding up – by as much as 50 miles per hour.
Rawat said the research problem will be one of the first his team of doctoral, master's and undergraduate students will investigate with the new DOD grant.
“As a human, if the speed limit sign changes drastically, I would be suspicious,” Rawat says. “Cars should think in the same ways. One piece of tape should not be used to make a decision. The car system needs to make reference to history and context. In human cognition we use multiple sources to make wise decisions.”
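The cross-checking Rawat describes can be illustrated with a minimal sketch. This is a hypothetical example, not Tesla's or the CoE-AIML's actual system: the `SpeedLimitFilter` class, its thresholds, and the map-data input are all assumptions, chosen only to show how a perceived speed limit might be rejected when it conflicts with recent history or a second source.

```python
# Hypothetical sketch (not Tesla's actual logic): sanity-check a
# perceived speed limit against recent history and a second source
# (e.g., map data) before letting it change the car's behavior.

from collections import deque

class SpeedLimitFilter:
    """Rejects implausible jumps in perceived speed limits."""

    def __init__(self, max_jump_mph=20, history_size=5):
        self.max_jump_mph = max_jump_mph        # largest believable change
        self.history = deque(maxlen=history_size)

    def accept(self, perceived_mph, map_limit_mph=None):
        """Return the limit to act on: the perceived value if it is
        consistent with history and map data, else the last trusted one."""
        last = self.history[-1] if self.history else None

        # Cross-check against a second source when one is available.
        if map_limit_mph is not None and abs(perceived_mph - map_limit_mph) > self.max_jump_mph:
            return last if last is not None else map_limit_mph

        # A drastic jump from recent history is suspicious (e.g., 35 -> 85).
        if last is not None and abs(perceived_mph - last) > self.max_jump_mph:
            return last

        self.history.append(perceived_mph)
        return perceived_mph

f = SpeedLimitFilter()
print(f.accept(35))                     # 35 - first reading is trusted
print(f.accept(85, map_limit_mph=35))   # 35 - tape-altered sign rejected
print(f.accept(45))                     # 45 - plausible change accepted
```

The design choice mirrors the quote: no single observation is allowed to drive a decision on its own; the reading must agree with history and at least one independent context source before it takes effect.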
Another area Rawat’s team is examining is bias. Although AI/ML algorithms run on computers and machines, human discrimination surfaces in their outputs – as in Apple’s automated credit limit approvals, which favored men over women.
Rawat said he expects Howard researchers at the newly established CoE-AIML to create more trustworthy, fair and reliable AI systems that could support a wide variety of applications, including the Internet of Things, electronic warfare, counterterrorism, cybersecurity and machine vision.