An interdisciplinary initiative at Rutgers University–New Brunswick seeks to harness the power of artificial intelligence while making the technology more transparent and a force for good.

Artificial intelligence (AI) has made aspects of life more convenient and even safer, courtesy of services such as Siri and Alexa. “There are tremendous benefits to AI,” says Fred S. Roberts, Distinguished Professor of Mathematics at the School of Arts and Sciences (SAS) and director of the Command, Control, and Interoperability Center for Advanced Data Analysis (CCICADA). The center is a U.S. Department of Homeland Security university consortium in which Rutgers is the lead partner. “We can use facial recognition technology to identify missing children, for instance, or diagnose rare diseases. But you have to keep the trade-offs in mind.”

One of those trade-offs is that the powerful predictive algorithms fueling everything from facial recognition to decisions about who gets a bank loan or a traffic ticket can adversely affect your privacy, health, well-being, and personal finances, and they are contributing to inequities in American society.

“With AI, people tend to worry about things like super-intelligent computers turned evil like in The Terminator,” says Lauren M.E. Goodlad, a professor in the Department of English at SAS and chair of Critical AI, a new interdisciplinary initiative examining the ethics of artificial intelligence. What is worrisome, she says, “is how this technology can be used in an opaque way to manipulate our behavior, as we’ve seen with Facebook, along with other problems that are making our country more unequal than it has been since the Gilded Age.”

“The dangers,” she says, “come with using massive sets of data on a scale that has never been available, coupled with massive computing power to facilitate data-centric machine learning.”

AI, whose roots date to the 1940s and whose modern form is driven largely by machine learning, falls within a technology continuum that emphasizes data-driven decision-making, whether it’s credit card companies deciding to approve a loan or engineers building an autonomous vehicle. “Depending on the inherent subjectivity and perceptions of the algorithm developer and the context in which it is developed, the algorithm may reflect biases that don’t benefit everyone equally,” says Piyushimita Thakuriah, dean of the Edward J. Bloustein School of Planning and Public Policy, where she is also a professor.

Rutgers researchers are determined to change the pattern through projects like Minds and Machines at SAS, a Critical AI initiative with a new approach to educating future data scientists. “It’s not enough to just produce fast algorithms,” Roberts says. “We need to build in ethical considerations from the start, being aware of the bias that algorithms can create and the resulting damage they can cause.”

Consider facial recognition technology, which is widely used in policing. In New Jersey alone, the police capture more than 700,000 videos a year, according to Roberts, and analyze them for positive aims such as finding missing children. Yet modern facial-recognition algorithms, which are trained on databases of millions or even billions of face photos, are far from perfect. “They misidentify Black people five to 10 times more than whites—and they are particularly likely to misidentify Black women,” says Roberts.

“The problem is,” says Peter March, executive dean of SAS, where he is a Distinguished Professor of Mathematics, “we’ve trained the computer, maybe inadvertently, to recognize white or male faces because we’ve fed more of those photos into the database. AI is not as good at recognizing faces that don’t look like that.”

Adds Goodlad: “Even if you think it’s a good idea to have facial-recognition systems installed to surveil the population at large—and that’s a question our society hasn’t been given the chance to answer—there’s the added problem of inaccuracy. For instance, do we have enough Black women in our data sets for this technology to work reliably?”

Errors made by facial-recognition technology used by police or immigration services can have tragic consequences, resulting in people being wrongfully arrested. But AI-driven inequities extend into every area of life, from statewide prescription drug databases misidentifying patients as abusers of opioids (and denying them pain medication), to digital redlining (excluding certain people from seeing online ads based on factors such as race, gender, or age), to inflated car insurance premiums for people who live in predominantly minority neighborhoods.

“These things have a real impact on the quality of people’s lives,” says Thakuriah, who worked on a traffic-safety project. “In low-income neighborhoods, the data on accidents was not as complete as it was in high-income neighborhoods. There were just as many accidents in lower-income areas, but people who lived in these areas trusted the police less and were less likely to report them.”

The implications of that kind of error can be dramatic: “This data is used to make billion-dollar investment decisions, such as where to build a Level 1 trauma center,” she says.

That’s one reason experts at Rutgers–New Brunswick are laser-focused on data curation: finding out what data is included and how it’s organized. “There’s not enough emphasis on what is actually in these data sets, much less a careful documentation of them,” says Goodlad.

Still, there are many ways to prevent the abuses of AI and to harness the technology for good. Rutgers now offers introductory undergraduate computer science courses that teach programming within the context of ethics. “The social, legal, and ethical considerations of technology should not be something we consider after the fact,” says March.

Goodlad likens the role of ethics in data science to the Hippocratic Oath in medicine. “Most professions have a guiding ethos, but this is a new idea in data science,” she says. “Besides educating everyone to be aware of the technology and what it’s good at and isn’t, we need to teach those in the field what it means to be an ethical scientist.”

That’s when the technology can be used in a way that benefits everyone. Kristin Dana, a professor at the School of Engineering, researches the growing presence of artificial intelligence and robotics, which she sees as a positive force in society. In 2020, the National Science Foundation awarded Dana and an interdisciplinary Rutgers team a $3 million grant for a five-year project entitled Socially Cognizant Robotics for a Technology Enhanced Society. The project will evaluate robots not only for performance measures such as speed and accuracy but also for how they work in real-world applications.

“We are at a point where robotics may soon be part of everyday life and work,” says Dana, “but we want robots to be developed in a way that they can adapt to human needs and desires, rather than the other way around.”

The role of regulation is important, too—a topic that Thakuriah is passionate about. “In this country, we have left the governance of bottom-line outcomes regarding health and the economy to people like Mark Zuckerberg,” she says.

Roberts, who cites regulations in the European Union that require decisions made by algorithms to be explainable, interpretable, and transparent, agrees. “There is room,” he says, “for regulation and societal decision-making with AI.”

Transparency is only possible with the right kind of education. “These algorithms can’t be hidden in a black box; they need to be made available to people who want to see what they are,” he says. “That means training people to make sure they document them in understandable language.”

Ultimately, none of us wants to base our actions on judgments that come from data we don’t understand, whether a medical diagnosis or taxes owed. “AI can explore permutations to a depth and degree that humans are simply not capable of, allowing us to extend our intuition and see patterns we couldn’t possibly see ourselves,” says March. “But the flip side is that we see judgments made and we have no idea why. And if you see a judgment rendered that you don’t understand, and you can’t replicate it, how good is it really?”