Law Professor Authors Report Guiding Federal AI Regulation

Distinguished Professor Ellen Goodman led a federal initiative to ensure artificial intelligence is trustworthy and safe. (Photo: Ron Downes Jr.)

A Rutgers law professor has been leading a federal initiative to ensure that artificial intelligence used in a growing number of sectors, including education, employment, finance, and health care, is trustworthy and safe.

Distinguished Professor Ellen P. Goodman worked with a team at the National Telecommunications and Information Administration (NTIA) for about a year, seeking input from industry and the public on ways government can appropriately guide and regulate the use of AI. The Artificial Intelligence Accountability Policy Report she authored was released March 27.

“AI is a tool the way software is a tool, electricity is a tool, and transportation systems are tools,” said Goodman, who is also the co-director of the Rutgers Institute for Information Policy and Law.

The report calls for improved transparency into AI systems, independent evaluations to verify the claims made about these systems, and consequences for imposing unacceptable risks or making unfounded claims, according to a press release.

“AI is being incorporated into all kinds of applications and processes like employment, health care, criminal justice, information and legal services, and transportation and climate response,” Goodman said. “These tools are here and they are spreading, so the question is: How can they be managed in a way that maximizes the benefits and mitigates the risks?”

NTIA sits in the U.S. Department of Commerce and advises the president on telecommunications and information policies.

She said the concerns over the use of AI fall into two general categories: first, whether the technology is being deployed appropriately, without discriminating or violating privacy or other laws, such as copyright; and second, the potential negative consequences of AI on people’s lives, such as job displacement and environmental impacts.

“Both sets of concerns require policies,” Goodman said. “Policies that mitigate AI risks, make AI tools accessible where they are needed, and ensure humans are protected and safe.”

There are numerous advantages to AI, according to NTIA. In health care, AI can detect disease early and identify the most effective treatment option. In finance, AI can recognize fraudulent activity or assist with personal investing. In education, AI can support individualized learning by helping identify students’ strengths and weaknesses.

But there are also disadvantages to using AI. Hidden biases in the mortgage approval process have meant higher denial rates in communities of color; the algorithms behind AI hiring systems have been shown to be non-compliant with federal laws; and so-called “deepfake” videos created using AI can hurt individuals, spread misinformation, and deceive the public.

“Responsible AI innovation will bring enormous benefits, but we need accountability to unleash the full potential of AI,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA administrator.

“NTIA’s AI Accountability Policy recommendations will empower businesses, regulators, and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer.”

Deployed properly, AI will create new economic opportunities and help tackle some of the big societal challenges we face, like climate change, Davidson has said.

“As has happened throughout history, technologies come and displace some jobs and then create new ones,” Goodman said. “That takes policy [to guide the change]. Also, on the other side for AI behaving badly, that takes policy and governmental actions. So, in both cases, we have to intervene, but I don't think you can stand in the way of advancing technology and yell stop.”