In Los Angeles, the cameras are everywhere. Cameras at traffic lights. Cameras on doorbells. Cameras on billions of smartphones. When your photo is snapped by these cameras, facial recognition technology can match your face to a database of millions of mug shots, potentially linking you to a crime.
Is this legal? Is this fair? Is this right?
These questions loom large over the technology, which the Los Angeles Police Department has been using since 2009. In November, an investigation by BuzzFeed News found that the LAPD had used the tech roughly 30,000 times over the previous decade, including with the controversial Clearview AI, which trawls the internet for social media photos. Activists, furious over the investigation's findings, sought a ban on the tech. In January, the LAPD adopted what's effectively a compromise policy: it prohibited the use of Clearview AI and other third-party photo databases, but allowed the department to use Facial Recognition Technology (FRT) with its own in-house database of mug shots.
Flash forward six months. After road-testing the system, the LAPD said it's an effective tool that's being used with restraint, rapidly speeding up the time it takes to scroll through mug shots and helping to catch crooks. Activists say it should be forbidden, and that it disproportionately impacts communities of color.
"You have to look at the broader context, and where it fits in the broader 'stalker state,'" said Hamid Khan, founder of the Stop LAPD Spying Coalition. "This is not a moment in time, but a continuation of history."
The roots of the "stalker state," according to Khan, go back to the Lantern Laws of the 18th century, when Black people were required to carry lanterns after dark. Since then, we've seen a number of policies that have disproportionately targeted Black and Latino people, ranging from New York City's "stop and frisk" to the Department of Homeland Security's more recent "Suspicious Activity Reporting" program (a partnership between federal and local law enforcement), which allows anyone to report perceived sketchy behavior to the authorities. One audit found that Black people were the subjects of 21% of these "suspicious activity" reports, even though they represent only 8% of Los Angeles County's population.
Activists worry FRT takes a pattern of discrimination and merges it with the brutal efficacy of surveillance tech.
"The danger now is that you're going to subject certain neighborhoods, certain people, and certain religious groups to this constant ever-present surveillance," said John Raphling, a senior researcher on criminal justice for Human Rights Watch. Raphling said that the Fourth Amendment, as established in 1979's Supreme Court case Brown v. Texas, means that the police can't simply waltz up to you and demand to see your ID for no reason.
"With FRT technology, that's out the window," said Raphling. "You're being identified at all times — who you are, what you're doing, who you're associating with." His concern is not just FRT itself, but the broader apparatus of sophisticated law enforcement – predictive analytics and data crunching from the photos, as now "you can't go out in public life without being under this surveillance."
The tech has been accused of racial bias, as research suggests the algorithms powering facial recognition lead to a higher chance of false matches for minorities and women. In one cheeky experiment, the ACLU used Amazon's facial recognition software ("Rekognition," which is not the software used by the LAPD) to compare the headshots of Congress with a database of mugshots, and they found that a whopping 39% of the false matches came from representatives who were people of color, even though they constitute just 20% of Congress.
The technology employed by the LAPD ignores pigmentation, according to an officer who oversees it, instead digitally mapping the face by looking at things like the distance between the eyes, or the distance from the nose to the mouth. (Image: Shutterstock)
Bita Amani, of the Center for the Study of Racism, Social Justice, and Health, adds that constant surveillance likely poses an underappreciated health risk to marginalized communities, and that even if facial recognition is flawlessly accurate, it's just "strengthening and expanding the powers of the system that already targets the Black and the poor, and the people at the margins."
The police, of course, see all of this quite differently.
"This is not a sole identification tool. Ever," said Captain Christopher Zine of the LAPD. "This is basically a digital mug book." In the old days, you'd need to flip through stacks of photos and try to eyeball a match. It's slow. It's tedious. Now the system takes a photo and then queries it against the database Los Angeles County Regional Identification System database (LACRIS), which contains 7 million photos from 4 million people. (The LAPD clarified that the photos come from decades of arrests, and include non-L.A. residents.)
Lieutenant Derek Sabatini heads up the LACRIS system. He is well aware of the concerns over bias, but suggested that facial recognition technology, in a certain sense, can be employed to reduce the role of implicit bias. If humans do indeed harbor implicit biases, maybe tech can help inject objectivity?
In the traditional use of a photo, said Lt. Sabatini, "you might look at a male Hispanic and then filter that search" based on race or gender. But the FRT works differently. (The department prefers the term "PCT," for Photo Comparison Technology.) Sabatini said that the PCT employed by the LAPD ignores pigmentation, and instead digitally maps the face by looking at things like the distance between the eyes, or the distance from the nose to the mouth.
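The geometric mapping Sabatini describes can be illustrated with a minimal, hypothetical sketch. Everything here is invented for illustration — the landmark names, the coordinates, and the tiny two-measurement "feature vector" — and real systems use far richer models with many more measurements. The core idea, though, is the same: reduce a face to pigmentation-free distance ratios, then compare those ratios numerically.

```python
import math

def feature_vector(landmarks):
    """Reduce (x, y) facial landmarks to pigmentation-free distance ratios.

    Landmark names are hypothetical; a real system would use dozens of points.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_span = dist(landmarks["left_eye"], landmarks["right_eye"])
    nose_mouth = dist(landmarks["nose_tip"], landmarks["mouth_center"])
    # Normalizing by eye span makes the vector scale-invariant, so photo
    # resolution and distance from the camera don't affect the comparison.
    return (1.0, nose_mouth / eye_span)

def similarity(v1, v2):
    """Euclidean distance between feature vectors; smaller means more alike."""
    return math.dist(v1, v2)

# Invented example coordinates for a probe photo and one database candidate.
probe = {"left_eye": (30, 40), "right_eye": (70, 40),
         "nose_tip": (50, 60), "mouth_center": (50, 80)}
candidate = {"left_eye": (33, 44), "right_eye": (72, 43),
             "nose_tip": (52, 62), "mouth_center": (51, 83)}

score = similarity(feature_vector(probe), feature_vector(candidate))
```

Because the vectors encode only geometry, nothing about skin tone enters the comparison directly — which is the claim Sabatini is making. (Critics' bias concerns, by contrast, tend to focus on how the underlying models were trained, which a sketch like this doesn't capture.)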
Sabatini gives an example. One time, officers were trying to catch someone who was stealing packages off porches. They had a photo of a tattooed individual who, at a casual glance, appeared to be a Hispanic man. When they ran the photo through the database, the match came back as a Hispanic woman, who was arrested and charged in court. Sabatini said the facial recognition technology "actually takes away any bias in the user and just kind of goes, 'here's what's best, based on what you're providing me.'"
Some of the tension — and apprehension — seems to be a conflation between what's possible and what is actually being done. The activists fear the worst ("look at the history of the criminal justice system," said Khan) and the cops insist they are following a reasonable protocol.
"One of the big misconceptions is surveillance," said Sabanti, who explains that live feeds (such as continuous footage from an elevator camera) are not being dumped into the LAPD's records and then later mined for algorithmic dark sorcery. "You can't just have live feeds going through a system," he said. "We don't have the capability of that, and it would be against the law."
The department is also forbidden from using third-party photo databases or tools like Clearview AI. Every photo must be legally obtained and used to help solve a crime.
Captain Zine said that since the January protocols were enacted, the department has created additional processes to ensure that only its own LACRIS database is being used, that extensive training is in place, and that only a small subset of the LAPD even has access to the tool. As for official numbers, or quantified results and updates? Those are still TBD. Zine said the LAPD is still conducting an internal review of FRT's effectiveness, and declined to provide numbers before that's finished (which he expects will be in September).
Critics like Khan, Raphling and Amani think that this middle ground is not enough, and that the potential for abuse — and the troubling history of discrimination — is itself reason enough to ban the tech. Khan points to reports that the LAPD sought photos from Ring doorbell cameras during the Black Lives Matter protests, as well as a high-profile false arrest in Detroit, although he is not aware of any specific abuses of the system, or examples of discrimination or misuse since the January protocol went into effect. The concerns seem to be more about the lurking threat of the ever-more-powerful "Stalker State" technology, as opposed to the more narrow use of the "digital mug book."
Raphling remains deeply skeptical. "Their argument is 'just trust us,'" he said, arguing that law enforcement has a history of saying "we use it in this very minimal way," but that "it turns out they were using it vastly more." He added, more bluntly, "we would be suckers to trust them again."
Sabatini said he understands the broader concerns around a creepy, "Black Mirror"-esque surveillance state. "That stuff scares us as much as it scares the public. I don't want that," he said with a laugh. "I think we're all on the same team, and people forget that."
Lead image by Ian Hurley.
Correction: An earlier version of this post misspelled Hamid Khan's name.
Venice-based Trueface is the latest computer vision startup to be snapped up by a Virginia company that sells security technology to airports across the country.
The company, called Pangiam, now has access to Trueface's suite of software powering contactless temperature checks and social distancing compliance monitoring. Last year, Trueface installed AI-powered kiosks at U.S. Air Force bases that recognize individuals without person-to-person contact.
As air travel picks up, Pangiam is gearing up to pitch airports on technology that lets travelers check in for flights and board planes without so much as a boarding pass.
The deal — Pangiam wouldn't say for how much — comes as airlines watch ticket sales soar again. Air travel over Memorial Day Weekend surpassed any other period during the pandemic, according to data from the Transportation Security Administration.
"Adding Trueface's technology solutions to Pangiam's offerings comes at a perfect time, as travel is poised to continue to rebound and passengers want reassurances that the highest health and safety protocols are being followed," Kirk Konert, a partner at Pangiam's parent company AE Industrial Partners, said in a statement.
Trueface did not immediately respond to a request for comment.
It's not the first move Pangiam has made to tie health and ticket screening together through biometric technology.
In March, the company bought a facial recognition system called veriScan, which is used by 40 airlines to check in passengers before a flight.
In a statement announcing that acquisition, Pangiam said the technology allows a person's face to "serve as both their passport and, for many airlines, their boarding pass."
The industry built around biometric technology, which uses fingerprints and facial scans to identify people, is steeped in controversy. Some are calling on Congress to set boundaries around its use. On Tuesday, King County, which includes Seattle, became the first county to ban administrative offices — including the Sheriff's Department — from using facial recognition technology. One Seattle City Council member cited "distinct threats" the tech could pose to residents, including "potential misidentification, bias and the erosion of our civil liberties."
Elsewhere, biometric checks are seen as a way to automate and speed up routine processes such as boarding a flight. At airports including LAX, passengers can board planes at some gates by walking through a scanner that runs on biometric technology built by U.S. Customs and Border Protection.
Trueface has raised about $4.4 million in venture capital since it was founded in 2013, according to Pitchbook data. Co-founders Shaun Moore and Nezare Chafni got their start in facial recognition technology by designing a smart doorbell system called Chui before pivoting to focus on software.
Both will serve "key leadership positions" within Pangiam, according to a statement.
Amazon may have halted the sale of its facial recognition software to police, but the move hasn't eased pressure on the tech giant.
In a letter sent Tuesday to CEO Jeff Bezos, Congressman Jimmy Gomez (D-Calif.) blasted Amazon's handling of its facial recognition software, Rekognition, calling on the company to provide detailed information about the privacy and bias issues inherent in the program.
Amazon could not be immediately reached for comment.
The letter comes on the heels of Amazon's announcement that it is banning police use of the surveillance software for a year so that Congress has time to place stricter regulations on the technology, a move the company says it supports. Microsoft placed a similar moratorium on its facial recognition technology, and IBM dropped its offering altogether, citing worries about violating basic human rights and freedoms.
An image from Amazon Rekognition's online developer guide.
Gomez, who represents Los Angeles and sits on the House Oversight and Reform Committee, called Amazon's move nothing more than "performative."
"Corporations have been quick to share expressions of support for the Black Lives Matter movement following the public outrage over the murders of Black Americans like George Floyd at the hands of police," he wrote. "Unfortunately, too many of these gestures have been performative at best. Calling on Congress to regulate facial recognition technology is one of these gestures."
The letter was another salvo in what Gomez characterizes as a two-year-long effort to get the e-commerce giant to divulge information about how widespread use of the surveillance software is and how data is collected.
"After two years of formal congressional inquiries – including bicameral letters, House Oversight Committee hearings, and in-person meetings – Amazon has yet to adequately address questions about the dangers its facial recognition technology can pose to privacy and civil rights, the accuracy of the technology, and its disproportionate impact on communities of color," Gomez told Bezos.
The issue has played out for years in the Los Angeles communities Gomez represents. Activists regularly object to the use of technology that has the potential to exacerbate racial bias and erode privacy. The issue exploded anew on the national stage in the aftermath of the George Floyd protests.
Gomez told Politico last week he's drafting legislation that would place restrictions on local and state police from using the technology.
Read Gomez's full letter below:
Dear Mr. Bezos:
On June 10, Amazon announced a one-year moratorium on police use of its facial recognition technology, Rekognition. In a statement, your company said it supports federal regulation for facial recognition technology and "stand[s] ready to help if requested." In the spirit of that offer, I write to request information on the implementation of the moratorium, and resubmit a list of questions I have asked your company over the course of nearly two years on public safety and civil rights concerns associated with Amazon's facial recognition technology – questions that have largely gone ignored or woefully unaddressed.
While I am encouraged by the direction Amazon appears to be taking on this issue, the ambiguity of the announcement raises more questions than answers. For example, the 102-word blog post announcement fails to specify whether Amazon will stop selling Rekognition to police departments during the moratorium; whether the company will stop the development of its facial recognition system during the moratorium; whether the moratorium would encompass both local and federal law enforcement agencies beyond the police, such as the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE); whether the moratorium applies to current contracts with law enforcement agencies; and whether Amazon plans to submit their technology to the National Institute of Standards and Technology (NIST) for testing before it resumes operations. I am also troubled by the one-year expiration of the moratorium and how Amazon will proceed in the event federal legislation is not signed into law within this self-imposed timeframe.
After two years of formal congressional inquiries – including bicameral letters, House Oversight Committee hearings, and in-person meetings – Amazon has yet to adequately address questions about the dangers its facial recognition technology can pose to privacy and civil rights, the accuracy of the technology, and its disproportionate impact on communities of color. Below is a representative, non-exhaustive list of questions I have asked Amazon regarding your company's facial recognition policies, and its decision to market it and sell it to law enforcement agencies. I look forward to your prompt and public engagement on these matters.
Information on any internal accuracy or bias assessments performed on Rekognition, and the results for race, gender, skin pigmentation, and age. Requested on November 29, 2018.
Further information on why – despite Amazon's recommended use of Rekognition at a 95% confidence threshold – it sells the product to law enforcement agencies and departments with an option to operate the software at the default 80% threshold. Requested on February 6, 2019; February 27, 2019; and September 26, 2019.
Information fully responsive to my question on whether Amazon built protections into the Rekognition system to protect the privacy rights of innocent Americans. Requested on November 29, 2018.
Details regarding mechanisms – if any – built into Rekognition that allow for the automatic deletion of unused biometric data. Requested on November 29, 2018.
Clarification on whether Amazon conducts any audits of Rekognition use by law enforcement to ensure that the software is not being abused for secretive government surveillance. Requested on February 6, 2019; and February 27, 2019.
Answers regarding reports that Amazon is engaged in surveillance partnerships with over 1,350 police departments across the United States. Requested on February 6, 2019; and February 27, 2019.
Records and information related to all law enforcement or intelligence agencies that Amazon has contracted or otherwise communicated with regarding acquisition of Rekognition and currently use the service. Requested on February 6, 2019.
Information on whether Amazon Rekognition is currently integrated with any police body-camera technology or existing public-facing camera networks. Requested on February 6, 2019; and February 27, 2019.
Clarification on whether the training dataset (rather than subsequent calibration sets) skewed white, or whether it was primed to recognize white faces. Requested on February 6, 2019; and February 27, 2019.
Answers regarding reports that Amazon is marketing this technology to ICE. Requested on February 6, 2019; and February 27, 2019.
Corporations have been quick to share expressions of support for the Black Lives Matter movement following the public outrage over the murders of Black Americans like George Floyd at the hands of police. Unfortunately, too many of these gestures have been performative at best. Calling on Congress to regulate facial recognition technology is one of these gestures. However, Amazon – as a global leader in technology and innovation – has a unique opportunity before them to put substantive action behind their sentiments of "solidarity with the Black community" by not selling a flawed product to police, and instead, play a critical role in ending systemic racism in our nation's criminal justice system.
Thank you for your attention to this important matter. I look forward to your responses on this issue.
Jimmy Gomez
Member of Congress