Would Biden's Proposed AI 'Bill of Rights' Be Effective—Or Just More Virtue Signaling?
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
Last week, the Biden administration released a blueprint for a proposed Artificial Intelligence Bill of Rights. The document offers guidance on protecting consumers’ data, limiting discrimination by algorithms and providing human alternatives to AI systems.
But despite its whopping 73 pages, the document is little more than virtue signaling from an administration keen on cracking down on big tech, according to many in Los Angeles’ AI community.
“Calling it a ‘bill of rights’ is grandiose,” said Fred Morstatter, a computer scientist at USC’s Information Sciences Institute (ISI). “It lacks the ability to make real change from a legal perspective; it’s not a law.” Nonetheless, Morstatter appreciates that the Biden administration is shining a light on issues of “algorithmic bias, biased decision making and fairness,” though he also noted that IBM proposed a similar set of rules a year ago.
Here are a few key takeaways for the industries the proposal targets most.
Previous Legislation Is More Effective
Several states are already debating similar initiatives to regulate AI, but few are close to the finish line.
A Massachusetts bill introduced last July would require companies making over $50 million to undergo impartial “accountability and bias prevention” audits. Vermont proposed a bill last March that would create an advisory group to look for bias in state-used AI software. In Washington state, legislators are considering the People’s Privacy Act, which if passed would limit private firms’ use of people’s data.
Everyone dot.LA spoke with, however, agreed that the most powerful measure regulating AI is a New York City law set to go into effect in January. Unlike the Biden proposal, the NYC law stipulates that companies found in violation are subject to hefty fines. Under the law, NYC companies would be banned from using automated hiring platforms, or fined for doing so, if they can’t prove in a yearly independent audit that their AI models are unbiased and won’t discriminate against job seekers based on gender or race. Though the law doesn’t clearly define what the audit entails, it says an independent auditor will look for “disparate impact” on minorities, a phrase experts are still trying to pin down.
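The law doesn’t prescribe a metric, but one common yardstick auditors use for disparate impact is the “four-fifths rule”: if a protected group’s selection rate falls below 80% of the highest group’s rate, the tool gets flagged. Here is a minimal sketch of that check; the threshold and applicant data are illustrative assumptions, not anything the NYC law mandates.

```python
# A minimal sketch of the "four-fifths rule," one common heuristic for
# quantifying disparate impact in hiring. The 0.8 threshold and the
# applicant data below are illustrative assumptions.

from collections import defaultdict

def selection_rates(applicants):
    """Share of applicants selected, per demographic group."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in applicants:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.
    A ratio under 0.8 is the traditional red flag for disparate impact."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data: (group, selected?) per applicant.
applicants = [("A", True), ("A", True), ("A", False), ("A", True),
              ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(applicants)
for group, ratio in impact_ratios(rates).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} ({flag})")
```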
“It is high time for regulating high tech, though it is an anathema to business interests,” Kristina Lerman, principal scientist at USC’s ISI, told dot.LA. “When we have technology that has life and death impacts on others, we have to take responsibility for it.”
It Does Little To Fix Issues with AI Workplace Hiring Models
Recent history is littered with examples of AI decision-making gone wrong.
Two years ago in Tippecanoe County, Indiana, a prison used AI to predict recidivism rates.
“They found that it was racist against Black people,” said Shirin Nikaein, co-founder and CEO of Upful.ai, a startup developing an AI coaching tool to make performance reviews less biased. She noted the prison’s AI model failed to account for the broader societal biases that contribute to Black Americans being arrested and incarcerated at disproportionate rates, adding: “of course, AI is going to discriminate, it's going to amplify the biases that already existed.”
One possible solution, according to Nikaein, is to train the AI on as diverse a dataset as possible, or to have the data externally audited for bias before it’s fed to the AI. She admitted, though, that this process is typically slower “and it does take more human intervention.”
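For a sense of what such a pre-training audit might look for, here is a minimal sketch that checks whether outcomes in a training set are distributed evenly across demographic groups before the data ever reaches a model. The column names, tolerance and data are hypothetical, not a standard.

```python
# A minimal sketch of the kind of pre-training data audit Nikaein
# describes: before a dataset reaches the model, check whether outcomes
# are distributed evenly across demographic groups. Column names and
# the tolerance are illustrative assumptions.

import pandas as pd

def audit_label_balance(df, group_col, label_col, tolerance=0.1):
    """Flag groups whose positive-label rate strays from the overall rate."""
    overall = df[label_col].mean()
    per_group = df.groupby(group_col)[label_col].mean()
    for group, rate in per_group.items():
        gap = rate - overall
        status = "FLAG" if abs(gap) > tolerance else "ok"
        print(f"{group}: positive rate {rate:.2f} (overall {overall:.2f}, gap {gap:+.2f}) {status}")

# Hypothetical review data: one past-outcome label per person, per group.
data = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "promoted": [1, 1, 1, 0, 1, 0, 0, 1, 0, 0],
})
audit_label_balance(data, "group", "promoted")
```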
The White House proposal does recommend that AI data sets be independently audited, and its list of people who should be able to view detailed audit results includes journalists, researchers and inspectors general. But again, because it never spells out what fines for misusing AI tools might look like, the Biden proposal is largely ineffective at preventing bias.
It Could Stifle Innovation
Eli Ben-Joseph, co-founder and CEO of Culver City-based Regard, which makes an AI-driven software tool for healthcare workers, said physicians already have to sign off on any AI-determined diagnosis before it’s given to a patient. But Ben-Joseph wants to remove the human training wheels and allow people to use the AI to diagnose themselves.
That’s why he’s concerned the recommendations could turn into over-regulation if companies have to wait for government approval to go to market. Currently, Regard doesn’t need FDA approval to operate because its algorithm isn’t a black box: users can see how it arrives at a diagnosis.
“Overall, a lot of the things [the White House] wrote about are things that I think are concerns that very much should be monitored and addressed by technology,” said Ben-Joseph. “The one hesitation I had, which is I think very standard when you have a government starting to meddle with things, is how much will it stifle innovation?”
That said, two years ago a team of researchers from the University of California, Berkeley’s School of Public Health found that a “widely used” hospital AI was racist: it assigned Black patients the same risk scores as white patients who were considerably healthier. That flaw cut the number of Black patients identified as needing extra care by 50% and contributed to them receiving subpar treatment.
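The Berkeley team’s core diagnostic, roughly, was to compare how sick patients actually were at the same algorithm-assigned risk score. Here is a minimal sketch of that kind of check with hypothetical data; the score bands and the use of chronic-condition counts as a proxy for actual sickness are assumptions for illustration, not the study’s exact method.

```python
# A minimal sketch of the audit logic behind the Berkeley finding:
# group patients by the model's risk score, then compare how sick each
# demographic group actually is within the same score band. If one group
# is consistently sicker at the same score, the model is under-serving it.
# All data and band edges here are hypothetical.

import pandas as pd

patients = pd.DataFrame({
    "group":              ["Black", "white"] * 4,
    "risk_score":         [0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7],
    "chronic_conditions": [4, 2, 5, 1, 7, 4, 6, 3],  # proxy for actual sickness
})

# Average actual sickness within each risk-score band, per group.
bands = pd.cut(patients["risk_score"], bins=[0, 0.5, 1.0], labels=["low", "high"])
summary = patients.groupby([bands, "group"], observed=True)["chronic_conditions"].mean()
print(summary)
# If the "Black" rows show more chronic conditions than the "white" rows
# at the same band, equal scores are masking unequal need.
```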
This is all to say that Biden’s proposal provides limited guidance on specific actions that healthcare companies should take to avoid such biased determinations.
Lerman said she doesn’t expect much to change for either big tech or startups working with AI “until there is some bite in the regulation… new laws that will allow prosecution of companies who violate the laws.”