Would Biden's Proposed AI 'Bill of Rights' Be Effective—Or Just More Virtue Signaling?

Samson Amore

Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College and previously covered technology and entertainment for TheWrap and reported on the SoCal startup scene for the Los Angeles Business Journal. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.

Biden and AI robot face off
courtesy of Andria Moore

Last week, the Biden administration released a blueprint for a proposed Artificial Intelligence Bill of Rights. The blueprint offered guidance on protecting consumers’ data, limiting discrimination by algorithms and providing human alternatives to AI systems.

But despite its whopping 73 pages, the bill is little more than virtue signaling from an administration keen on cracking down on big tech, according to many in Los Angeles’ AI community.

“Calling it a ‘bill of rights’ is grandiose,” said Fred Morstatter, a computer scientist at USC’s Information Sciences Institute (ISI). “It lacks the ability to make real change from a legal perspective; it’s not a law.” Nonetheless, Morstatter appreciates that the Biden administration is shining a light on issues of “algorithmic bias, biased decision making and fairness,” though he also noted that IBM proposed a similar set of rules a year ago.

Here are a few key takeaways for the industries the 73-page proposal targets most.

Previous Legislation Is More Effective

Several states are already debating similar initiatives to regulate AI, but few are close to the finish line.

A Massachusetts bill introduced last July would require companies making over $50 million to undergo impartial “accountability and bias prevention” audits. Vermont proposed a bill last March that would create an advisory group to look for bias in state-used AI software. In Washington state, legislators are considering the People’s Privacy Act, which if passed would limit private firms’ use of people’s data.

Everyone dot.LA spoke with, however, asserts that the most powerful bill regulating AI is a New York City law set to go into effect in January. Unlike the Biden proposal, the NYC law stipulates that companies found in violation are susceptible to hefty fines. Under the new law, NYC companies would be banned from using automated hiring platforms, or fined for doing so, if they can’t prove in a yearly independent audit that their AI models are unbiased and won’t discriminate against job seekers based on gender or race. Though it’s not clearly defined what the audit would entail, the law says an impartial auditor will look for “disparate impact” on minorities, a phrase experts are still trying to suss out.
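The law leaves the audit's mechanics undefined, but one metric commonly discussed in this context is the "impact ratio": a group's selection rate divided by the selection rate of the most-selected group. The sketch below is purely illustrative (the data, group names and threshold are assumptions, not anything specified by the NYC law or the White House blueprint); it shows how such a ratio might be computed and flagged under the EEOC's rule-of-thumb "four-fifths" threshold.

```python
# Hypothetical sketch of one metric a hiring-algorithm audit might compute.
# All names, data and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of candidates in a group the model advanced (1 = advanced)."""
    return sum(outcomes) / len(outcomes)

def impact_ratios(groups):
    """Each group's selection rate relative to the highest-rated group."""
    rates = {name: selection_rate(o) for name, o in groups.items()}
    top = max(rates.values())
    return {name: rate / top for name, rate in rates.items()}

# Illustrative outcomes: 1 = advanced by the hiring model, 0 = rejected.
candidates = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 advanced
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 advanced
}

for group, ratio in impact_ratios(candidates).items():
    # The EEOC "four-fifths rule" treats ratios below 0.8 as a
    # potential sign of adverse impact.
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Whether the NYC audits will use this metric, some other statistical test, or a more qualitative review is exactly the ambiguity the experts quoted here are wrestling with.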

“It is high time for regulating high tech, though it is an anathema to business interests,” Kristina Lerman, principal scientist at USC’s ISI, told dot.LA. “When we have technology that has life and death impacts on others, we have to take responsibility for it.”

It Does Little To Fix Issues with AI Workplace Hiring Models

Recent history is littered with examples of AI decision-making gone wrong.

Two years ago in Tippecanoe County, Indiana, a prison used AI to predict recidivism rates.

“They found that it was racist against Black people,” said Shirin Nikaein, co-founder and CEO of Upful.ai, a startup developing an AI coaching tool to make performance reviews less biased. She noted the prison’s AI model failed to account for all the biases in society writ large that contributed to an outsized portion of Black Americans being arrested or incarcerated – and added, “of course, AI is going to discriminate, it's going to amplify the biases that already existed.”

One possible solution to this issue, according to Nikaein, is to give the AI as diverse a dataset as possible, or to have an external audit of the data before it's given to the AI to check for bias. Though, she also admitted this process is typically slower “and it does take more human intervention.”
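Nikaein doesn't specify what such a data audit looks like in practice; as a minimal hypothetical sketch, one common first step is simply measuring how well each demographic group is represented in the training data before the model ever sees it. The field names and records below are illustrative assumptions.

```python
# Illustrative sketch of a pre-training representation check.
# Field names and records are hypothetical, not from any real audit.
from collections import Counter

def representation_report(records, field):
    """Share of the dataset each value of `field` accounts for."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical training records for a hiring model.
training_data = [
    {"label": 1, "gender": "female"},
    {"label": 0, "gender": "male"},
    {"label": 1, "gender": "male"},
    {"label": 1, "gender": "male"},
]

# A badly under-represented group gives the model few examples to learn
# from, which is one way existing societal bias gets amplified.
for value, share in representation_report(training_data, "gender").items():
    print(f"{value}: {share:.0%} of training examples")
```

A real audit would go further, checking label balance within each group and how outcomes correlate with protected attributes, which is where the slower, human-in-the-loop work Nikaein mentions comes in.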

The White House proposal does recommend that AI data sets be independently audited, and its list of people who should be able to view detailed audit results includes journalists, researchers and inspectors general. But again, without outlining what fines for misusing AI tools might look like, the Biden proposal is largely ineffective in preventing bias.

It Could Stifle Innovation

Eli Ben-Joseph, co-founder and CEO of Culver City-based Regard, an AI-driven software tool for healthcare workers, said physicians already have to sign off on any AI-determined diagnosis before it’s given to a patient. But Ben-Joseph wants to remove the human training wheels and allow people to use the AI to diagnose themselves.

Which is why he’s concerned the recommendations could turn into over-regulation if companies have to wait for government approval to go to market. Currently, Regard doesn’t need FDA approval to operate because it’s not a black-box algorithm: users can see how it arrives at a diagnosis.

“Overall, a lot of the things [the White House] wrote about are things that I think are concerns that very much should be monitored and addressed by technology,” said Ben-Joseph. “The one hesitation I had, which is I think very standard when you have a government starting to meddle with things, is how much will it stifle innovation?”

That said, two years ago a team of researchers from the University of California Berkeley’s School of Public Health found that a “widely used” hospital AI was racially biased: it assigned Black patients who were sicker than white patients the same level of risk. That error cut the number of Black patients identified as needing extra care by 50%, and contributed to their receiving subpar treatment overall.

This is all to say that Biden’s proposal provides limited guidance on specific actions that healthcare companies should take to avoid such biased determinations.

Lerman said she doesn’t expect there to be much change for both big tech and startups working with AI, “until there is some bite in the regulation… new laws that will allow prosecution of companies who violate the laws.”
