Government Should Address Potential Bias in Artificial Intelligence, Lawmakers Say
A House member and a senator outlined their plans to ensure that AI systems don’t discriminate, even unintentionally.
Bias in artificial intelligence could critically impact the deployment, adoption and evolution of the technology, Democratic lawmakers said in Washington Wednesday.
They also detailed their plans to combat the issue and help America maintain its position as a global leader in AI.
“We have a real concern about bias in data—is that data biased in some historical way or in some intentional way?” Rep. Jerry McNerney, D-Calif., told Politico’s Steven Overly at an event held by the publication and the technology corporation Intel. “And we want to make sure the data doesn’t harm groups of people or sectors of the country.”
McNerney, who holds a doctorate in mathematics, explained that algorithm developers implicitly assume whatever they produce will be logical. Often, it isn’t until they look back at the results that they realize they have introduced bias, for example by overlooking important considerations during development or by training on data that excluded certain groups of people.
“I mean there’s just a tremendous opportunity for bias and bias really affects people’s lives,” he said.
The representative also described how AI bias can affect vital aspects of individuals’ lives, such as whether they are approved for loans or what health care and insurance they receive. He added that as the government opens up datasets to the public, it needs to ensure the data it releases is “good.”
“It’s not just the algorithms that produce bias, it’s the data that’s even more likely to produce bias,” he said.
In a separate conversation with Trish Damkroger, vice president of Intel’s Data Center Group, Sen. Martin Heinrich, D-N.M., reiterated that data and AI bias are issues federal officials need to devote serious attention and resources to going forward.
“If you are going to be using public data and especially for public purposes, then we shouldn't have a black box sort of AI system,” Heinrich said.
He explained that when AI and machine learning are used in consequential personal matters like financial services, such as applications for credit cards or home mortgages, it’s imperative to ensure that bias isn’t baked into the system, the datasets or the way the algorithm is written.
“We need to know why an individual or machine learning is producing the results that it’s producing,” he said.
Both lawmakers also made recommendations on how to tackle bias. McNerney said that while the administration moved in the right direction by producing a national AI strategy, the plan lacks clear specifics on funding and standards. He said the government should also issue guidance on how best to leverage good data.
McNerney introduced the AI in Government Act to help agencies produce and execute their own plans to effectively leverage the budding tech. A companion bill was also introduced in the Senate.
Also in the Senate, Heinrich and a bipartisan group of lawmakers introduced the Artificial Intelligence Initiative Act, which would create a $2.2 billion federal investment over five years to “accelerate the responsible delivery” of AI. The legislation would also establish a National AI Coordination Office, which Heinrich said aims to unite experts from the National Science Foundation, the National Institute of Standards and Technology and other agencies to establish appropriate standards.
“We haven’t had that coordination up until now,” he said. “That’s one of the things our legislation does is try to force that coordination across the federal government without necessarily dictating or micromanaging who and where and how.”
The senator’s concerns come only a month after technology experts warned Congress that the government must quickly address how AI and machine learning systems are reproducing historical patterns of discrimination.
Politico noted that Republican lawmakers were also invited to speak at the event but chose not to attend.