Would Biden's Proposed AI 'Bill of Rights' Be Effective—Or Just More Virtue Signaling?

Samson Amore

Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.

Image: Biden and an AI robot face off (courtesy of Andria Moore)

Last week, the Biden administration released a blueprint for a proposed Artificial Intelligence Bill of Rights. The document includes guidance on protecting consumers’ data, limiting discrimination by algorithms and offering human alternatives to automated systems.

But despite its whopping 73 pages, the blueprint is little more than virtue signaling from an administration keen on cracking down on big tech, according to many in Los Angeles’ AI community.


“Calling it a ‘bill of rights’ is grandiose,” said Fred Morstatter, a computer scientist at USC’s Information Sciences Institute (ISI). “It lacks the ability to make real change from a legal perspective; it’s not a law.” Nonetheless, Morstatter appreciates that the Biden administration is shining a light on issues of “algorithmic bias, biased decision making and fairness,” though he also noted that IBM proposed a similar set of rules a year ago.

Here are a few key takeaways for the industries the proposal targets most.

Previous Legislation Is More Effective

Several states are already debating similar initiatives to regulate AI, but few are close to the finish line.

A Massachusetts bill introduced last July would require companies making over $50 million to undergo impartial “accountability and bias prevention” audits. Vermont proposed a bill last March that would create an advisory group to look for bias in state-used AI software. In Washington state, legislators are considering the People’s Privacy Act, which if passed would limit private firms’ use of people’s data.

Everyone dot.LA spoke with, however, asserts that the most powerful measure regulating AI is a New York City law set to go into effect in January. Unlike the Biden proposal, the NYC law stipulates that companies found in violation are subject to hefty fines. Under the new law, NYC companies would be banned from using automated hiring platforms, or fined for doing so, unless they can prove in a yearly independent audit that their AI models are unbiased and won’t discriminate against job seekers based on gender or race. Though the law doesn’t clearly define what the audit would entail, it says an impartial judge will look for “disparate impact” on minorities, a phrase experts are still trying to suss out.
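To make that phrase concrete, here is a minimal sketch of how an auditor might quantify “disparate impact” in a hiring tool’s output using the four-fifths rule, a longstanding heuristic from U.S. employment regulators. The data, group labels and 0.8 threshold below are illustrative assumptions, since the NYC law does not spell out a methodology.

```python
# Minimal sketch: measuring "disparate impact" in automated hiring
# outcomes with the EEOC's four-fifths rule. The records and groups
# are hypothetical; the NYC law leaves the exact audit method open.

from collections import defaultdict

# Hypothetical (group, was_advanced) outcomes from a hiring tool.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Fraction of candidates in each group that the tool advanced."""
    counts = defaultdict(lambda: [0, 0])  # group -> [advanced, total]
    for group, advanced in records:
        counts[group][0] += int(advanced)
        counts[group][1] += 1
    return {g: adv / total for g, (adv, total) in counts.items()}

rates = selection_rates(outcomes)
best = max(rates.values())

# Impact ratio: each group's selection rate relative to the
# most-favored group. Ratios under 0.8 are a common red flag.
for group, rate in rates.items():
    ratio = rate / best
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

An audit along these lines flags the tool when one group advances at less than 80% the rate of the most-favored group, which is one common way to operationalize “disparate impact,” though, as the experts above note, it is far from the only one.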

“It is high time for regulating high tech, though it is an anathema to business interests,” Kristina Lerman, principal scientist at USC’s ISI, told dot.LA. “When we have technology that has life and death impacts on others, we have to take responsibility for it.”

It Does Little To Fix Issues with AI Workplace Hiring Models

Recent history is littered with examples of AI decision-making gone wrong.

Two years ago in Tippecanoe County, Indiana, a prison used AI to predict recidivism rates.

“They found that it was racist against Black people,” said Shirin Nikaein, co-founder and CEO of Upful.ai, a startup developing an AI coaching tool to make performance reviews less biased. She noted the prison’s AI model failed to account for the societal biases that contribute to an outsized share of Black Americans being arrested or incarcerated. “Of course, AI is going to discriminate,” she added. “It's going to amplify the biases that already existed.”

One possible solution, according to Nikaein, is to give the AI as diverse a dataset as possible, or to have the data externally audited for bias before it’s fed to the model. She admitted, though, that this process is typically slower “and it does take more human intervention.”
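For illustration, here is a minimal sketch of what such a pre-training data audit might look like: before a dataset reaches the model, compare how each group is represented and how often each group carries a positive label. The records, field names and 20% threshold are hypothetical, not drawn from any particular tool.

```python
# Minimal sketch of a pre-training data audit along the lines Nikaein
# describes. All records, fields and thresholds here are hypothetical.

records = [
    {"group": "group_a", "label": 1},
    {"group": "group_a", "label": 1},
    {"group": "group_a", "label": 0},
    {"group": "group_b", "label": 0},
    {"group": "group_b", "label": 0},
    {"group": "group_b", "label": 1},
]

def audit(records, max_gap=0.2):
    """Report each group's share of the data and positive-label rate."""
    groups = {}
    for r in records:
        stats = groups.setdefault(r["group"], {"n": 0, "positive": 0})
        stats["n"] += 1
        stats["positive"] += r["label"]

    total = len(records)
    findings = [
        (group, stats["n"] / total, stats["positive"] / stats["n"])
        for group, stats in groups.items()
    ]

    # Flag large gaps in positive-label rates between groups; a model
    # trained on this data would likely reproduce such gaps.
    rates = [pos_rate for _, _, pos_rate in findings]
    if max(rates) - min(rates) > max_gap:
        print(f"WARNING: positive-label rates differ by more than {max_gap:.0%}")
    for group, share, pos_rate in findings:
        print(f"{group}: {share:.0%} of data, positive-label rate {pos_rate:.0%}")

audit(records)
```

The trade-off Nikaein mentions shows up even in this toy version: a human still has to decide which groups to compare, what counts as a meaningful gap and what to do when the warning fires, all before the model ever trains.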

The White House proposal does recommend that AI data sets be independently audited, and its list of people who should be able to view detailed audit results includes journalists, researchers and inspectors general. But again, without spelling out what fines for misusing AI tools might look like, the Biden proposal is largely ineffective in preventing bias.

It Could Stifle Innovation

Eli Ben-Joseph, co-founder and CEO of Culver City-based Regard, which makes an AI-driven software tool for healthcare workers, said physicians already have to sign off on any AI-determined diagnosis before it’s given to a patient. But Ben-Joseph wants to remove the human training wheels and allow people to use the AI to diagnose themselves.

That’s why he’s concerned the recommendations could turn into over-regulation if companies have to wait for government approval to go to market. Currently, Regard doesn’t need FDA approval to operate because it isn’t a black-box algorithm: users can see how it arrives at a diagnosis.

“Overall, a lot of the things [the White House] wrote about are things that I think are concerns that very much should be monitored and addressed by technology,” said Ben-Joseph. “The one hesitation I had, which is I think very standard when you have a government starting to meddle with things, is how much will it stifle innovation?”

That said, two years ago a team of researchers from the University of California, Berkeley’s School of Public Health found that a “widely used” hospital AI was racist: the algorithm assigned Black patients the same risk scores as white patients who were considerably less sick. That flaw cut the number of Black patients identified as needing extra care by 50%, and overall contributed to them receiving subpar treatment.

This is all to say that Biden’s proposal provides limited guidance on specific actions that healthcare companies should take to avoid such biased determinations.

Lerman said she doesn’t expect much change for either big tech or startups working with AI “until there is some bite in the regulation… new laws that will allow prosecution of companies who violate the laws.”
