The LAPD is facing criticism from privacy groups after SweepWizard — an app it used to conduct multi-agency raids last fall — exposed the personal data of thousands of suspects and details of police operations. Earlier this month Wired reported that LAPD and the regional Internet Crimes Against Children (ICAC) Task Force used a free trial version of the app to conduct multi-agency raids on sex offenders. While the app led to the arrest of 141 suspects, it also revealed sensitive details on police operations that could have put the entire mission at risk.
“LAPD has a long history of using free trials of surveillance technologies to experiment on our communities, especially in Skid Row and other Black and brown neighborhoods,” said Tiff Guerra, a researcher with the Stop LAPD Spying Coalition.
In 2020, the Los Angeles Times reported that the LAPD ended its relationship with controversial predictive policing program PredPol. Though Los Angeles Police Chief Michel Moore claimed the decision was due to financial constraints, community activists argued that Moore’s decision only came after they pressured the department to discontinue the program.
Guerra also cited the police department’s use of social media surveillance tools like Dataminr and Skopenow, which officers use to regularly monitor suspects’ online activity and to search for specific terms.
“Every day, LAPD generates and collects sensitive data about our locations, associations, appearance, interests, and relationships. LAPD tracks and maintains this data regardless of whether a person is suspected of any crime,” they said.
Guerra added that ODIN Intelligence — the company behind SweepWizard — sells another app that uses gait and facial recognition to track homeless communities. Known as the Homeless Management Information System, or HMIS, the app lets users create a profile for each unhoused resident that includes their photo and personal data, including prior arrests, temporary housing history and contact information for family members and parole officers. The app’s facial recognition feature allows police to identify unhoused persons by searching for a match in the larger database.
None of this is particularly new. Law enforcement agencies have regularly used third-party surveillance tech — such as location-tracking and social media monitoring apps — to spy on suspects. But digital privacy groups say that the marketplace for such police tech faces few restrictions. “There are many privacy and safety harms caused by police using third party surveillance technologies including wrongful arrests, over-policing, and data breaches like this one. The marketplace for police tech is unregulated and full of poorly designed and poorly implemented software,” said Jake Wiener, counsel at the Electronic Privacy Information Center (EPIC).
Wiener also stressed that the police shouldn’t have used a free trial version of an app on “live cases”, as the LAPD did with SweepWizard. “Unfortunately, free trials are a common sales technique in the police tech market, and often lead to individual officers using tech without the knowledge or approval of supervisors,” said Wiener.
ODIN Intelligence — the company that developed SweepWizard and other police surveillance apps — regularly held a “Sex Offender Supervision Officer Bootcamp” to train officers on how to use the SweepWizard app. Screenshots of the app posted on the bootcamp’s website indicate that the third-party app allowed police to input personal details on targets, including address, date of birth, social security number and a photo. The app also includes a section for officers, which allows the user to assign specific targets to officers by name.
Wiener said that storing sensitive information such as addresses, identifying details and social security numbers is a “substantial privacy risk” and likely isn’t necessary to make the app work.
Following the Wired story’s publication, TechCrunch reported that an unidentified hacker exfiltrated data from ODIN Intelligence’s website on Sunday. The hacker defaced the website, leaving behind a message claiming that “all data and backups” on the company’s servers had been shredded. The website was quickly taken offline, and remains so as of Wednesday afternoon.
The LAPD is currently investigating what caused the SweepWizard breach, and has suspended use of the app. dot.LA has reached out to the LAPD for an update on its investigation, but has yet to hear back.
The LAPD Spends Millions on Spy Tech. Here’s What They’re Buying
Over the past six years, the LAPD spent millions in FEMA funds on automated license plate readers, predictive policing software and other spy tech, according to a new report. Authored by Action Center on Race and Economy (ACRE), the report focused on a counter-terrorism grant program under FEMA known as the Urban Area Security Initiative (UASI). First created in 2003, the UASI was designed to help the largest cities beef up their emergency preparedness agencies and prevent acts of domestic terrorism.
According to a mayor’s report from January 2021, the city of Los Angeles received roughly $20.5 million in UASI grants. Of that amount, approximately half (or $10 million) went to the LAPD. Notably, the amount was only a drop in the bucket of LAPD’s total $1.7 billion budget for the fiscal year 2020 (which was cut by $150 million in response to the movement to defund the police). For fiscal years 2022-2023, the L.A. City Council approved a $1.9 billion operating budget for the city’s police.
While local police departments receiving federal money is nothing new, critics say the existence of such funds gives the LAPD more freedom to invest in invasive technologies. The LAPD recently came under scrutiny for its use of facial recognition technology. Earlier this month, an inspector general’s report revealed that the LAPD’s use of facial recognition software resulted in a positive match only about 55% of the time, and that the department didn’t track incidents where matches led to the misidentification of a suspect.
Automatic License Plate Readers (ALPR)
Between 2016 and 2020, the LAPD purchased at least $1.27 million worth of ALPRs, per the report. ALPRs have come under increasing scrutiny for their high error rates and risks to privacy. The high-speed cameras scan images of nearby license plates and alert police officers to stolen vehicles or people wanted for a crime.
One 2020 audit found that the LAPD and three other police departments were collecting massive amounts of data on drivers and their movements — but weren’t doing enough to protect privacy. According to the audit, of the 320 million images that the LAPD had stored in 2020, roughly 99.9% were unrelated to criminal investigations.
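The alerting step the report describes amounts to hotlist matching: every scanned plate is checked against a list of wanted plates, and only hits raise an alert. The sketch below is a minimal illustration of that idea; the plate numbers and hotlist reasons are invented, and real ALPR systems add OCR, timestamps and location logging.

```python
# Minimal sketch of ALPR hotlist matching. Plate numbers and reasons are
# invented examples; this only illustrates the alerting logic.
HOTLIST = {"8ABC123": "stolen vehicle", "6XYZ999": "felony warrant"}

def check_plates(scanned, hotlist):
    """Return (plate, reason) alerts for every scan found on the hotlist."""
    return [(plate, hotlist[plate]) for plate in scanned if plate in hotlist]

scans = ["7KMN441", "8ABC123", "5TRS002"]
print(check_plates(scans, HOTLIST))  # only the hotlisted plate raises an alert
```

Note that every scan is stored regardless of whether it matches, which is exactly how the audit’s 99.9% figure arises: almost all retained images never correspond to a hotlist hit.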
Palantir Data Fusion Platforms
Palantir, a controversial software company that has faced criticism for enabling government surveillance, has provided predictive policing software to the LAPD since 2011. The company’s platform can identify criminal “hotspots” by analyzing license plate photos, police reports, gang databases, regional crime reports and other data.
The exact amount of money the company has received from UASI is unclear. Public records requests, however, estimate that the LAPD spent over $20 million on Palantir software between 2009 and 2018.
It’s important to note that such tools have disproportionately targeted low-income individuals, people of color and unhoused people. Last year, more than 1,400 mathematicians signed a letter in the trade journal Notices of the American Mathematical Society (AMS) criticizing predictive policing for its racial biases. In 2019, PredPol (a predictive policing tool once used by the LAPD) faced criticism from mathematicians for using flawed algorithms that created feedback loops.
“If you build predictive policing, you are essentially sending police to certain neighborhoods based on what the system told you—but that also means you’re not sending police to other neighborhoods because the system didn’t tell you to go there,” University of Utah computing professor Suresh Venkatasubramanian told Motherboard.
Motorola Radio Systems
Between 2016 and 2020, the LAPD spent roughly $24 million to upgrade its radio communications network through Motorola. As the Los Angeles Times reported in 2007, the department’s two-way portable radio system was often unreliable and had been in need of an upgrade for years. Some officers even resorted to using their cell phones for field communications.
Critics note that radio encryption has allowed police to avoid public oversight, as many cities have encrypted their scanners in recent years. Since 2020, many cities, including Santa Monica, Santa Cruz and San Diego (though not Los Angeles), have opted to take their radio communications private in order to comply with a DOJ directive to protect private information. But critics warn that such a move prevents the media and the public from keeping track of criminal activity or public safety developments during natural disasters.
Cell Site Simulators
The LAPD also spent $630,000 of 2020 UASI funding on cell-site simulators — devices that mimic cell towers, allowing police to pinpoint the location of a specific smartphone. Cell site simulators can identify the unique IMSI number (international mobile subscriber identity) attached to every mobile device.
Also known as Stingrays or IMSI catchers, the devices trick nearby mobile devices into connecting with them and then collect the data those devices transmit, including their locations. Some devices can even intercept the content of SMS messages and voice calls, as well as any websites visited.
Cell site simulators are widely used in California and in major police departments throughout the country, including in cities like Chicago, Boston and New York City. Critics of cell site simulators say they function as dragnet surveillance tools — capturing data from bystanders — and can potentially interfere with 911 calls.
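The reason the trick works is simple: handsets attach to whichever tower advertises the strongest signal, with no way to verify that the tower belongs to a real carrier. The toy model below illustrates that behavior; the tower names, signal values and IMSI are invented, and this is a conceptual sketch, not real baseband code.

```python
# Toy model of why an IMSI catcher works: phones attach to the strongest
# advertised signal and hand over their subscriber identity (IMSI).
# All names and numbers here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Tower:
    name: str
    signal_dbm: int                        # higher (less negative) = stronger
    captured_imsis: list = field(default_factory=list)

@dataclass
class Phone:
    imsi: str                              # unique subscriber identity on the SIM

    def attach(self, towers):
        # The handset simply picks the strongest signal, legitimate or not.
        best = max(towers, key=lambda t: t.signal_dbm)
        best.captured_imsis.append(self.imsi)
        return best

carrier = Tower("carrier-site-12", signal_dbm=-95)
stingray = Tower("rogue-simulator", signal_dbm=-60)   # parked nearby, so stronger

phone = Phone(imsi="310410123456789")
connected = phone.attach([carrier, stingray])
print(connected.name)                      # the phone attaches to the rogue tower
print(stingray.captured_imsis)             # which now holds the subscriber identity
```

This is also why bystanders are swept up: every phone in range attaches to the strongest tower, not just the target’s.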
Social Media Surveillance
Skopenow — a social media monitoring company — counts the LAPD as one of its customers, along with the U.S. Secret Service, the U.S. Postal Inspection Service and Broward County, Florida. Last year, the software led to the arrests of three middle school students in Florida after police found threats they made on social media.
According to the company’s website, Skopenow functions as a sort of “analytical search engine” for social media. It claims it can inform customers when criminals post content related to drugs, weapons or stolen items. It also lets users easily view a person of interest’s mutual friends, shared vehicles, employment histories and business affiliations.
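At its core, the flagging behavior described above can be thought of as keyword matching over a stream of posts. The sketch below shows that general technique; the watch terms and posts are invented, and Skopenow’s actual (proprietary) matching is surely far more involved.

```python
# Minimal sketch of keyword-based social media flagging, the general
# technique described above. Terms and posts are invented examples;
# this is not Skopenow's actual algorithm.
WATCH_TERMS = {"stolen", "pistol"}

posts = [
    {"user": "a", "text": "Lovely sunset at the pier tonight"},
    {"user": "b", "text": "Selling a stolen bike, cheap"},
]

def flag_posts(posts, terms):
    """Return posts whose text contains any watched term (case-insensitive)."""
    flagged = []
    for post in posts:
        words = set(post["text"].lower().split())
        if words & terms:                 # any overlap with the watch list
            flagged.append(post)
    return flagged

print(flag_posts(posts, WATCH_TERMS))     # only user b's post is flagged
```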
Since the ACRE report’s analysis doesn’t go past fiscal year 2020, it doesn’t capture recent developments in LAPD’s use of surveillance. In August, the L.A. Police Commission adopted rules that will require the LAPD to submit detailed proposals before acquiring new technology. It also needs to disclose which data will be collected on people and for how long it will be kept.
While such reform is definitely a start, critics point out that similar policies on facial recognition haven’t reined in police abuse and have instead served as a cover.
In LA, the Fight Over Facial Recognition Tech Is Just Heating Up
In Los Angeles, the cameras are everywhere. Cameras at traffic lights. Cameras on doorbells. Cameras on billions of smartphones. When your photo is snapped by these cameras, facial recognition technology can match your face to a database of millions of mug shots, potentially linking you to a crime.
Is this legal? Is this fair? Is this right?
These questions loom large over the technology, which the Los Angeles Police Department has been using since 2009. In November, an investigation by BuzzFeed News found that the LAPD had used the tech 30,000 times in the last decade, including using the controversial "Clearview AI," which trawls the internet for social media photos. Activists, furious over the investigation's findings, sought a ban on the tech. In January, the LAPD adopted what's effectively a "compromise" policy that prohibited the use of Clearview AI and other third-party photo databases, but allowed the department to use Facial Recognition Technology (FRT) with its own in-house database of mugshots.
Flash forward six months. After road-testing the system, the LAPD said it's an effective tool that's being used with restraint, rapidly speeding up the time it takes to scroll through mug shots and helping to catch crooks. Activists say it should be forbidden, and that it disproportionately impacts communities of color.
"You have to look at the broader context, and where it fits in the broader 'stalker state,'" said Hamid Khan, founder of the Stop LAPD Spying Coalition. "This is not a moment in time, but a continuation of history."
The roots of the "stalker state," according to Khan, go back to the Lantern Laws of the 18th century, when Black people were required to carry lanterns after dark. Since then, we've seen a number of policies that have disproportionately targeted Black and Latino people, ranging from New York City's "stop and frisk" to the Department of Homeland Security's more recent "Suspicious Activity Reporting" program (a partnership between federal and local law enforcement), which allows anyone to report perceived sketchy behavior to the authorities. One audit found that Black people were reported in 21% of these "suspicious activities," even though they represent only 8% of Los Angeles County's population.
Activists worry FRT takes a pattern of discrimination and merges it with the brutal efficacy of surveillance tech.
"The danger now is that you're going to subject certain neighborhoods, certain people, and certain religious groups to this constant ever-present surveillance," said John Raphling, a senior researcher on criminal justice for Human Rights Watch. Raphling said that the Fourth Amendment, as established in 1979's Supreme Court case Brown v. Texas, means that the police can't simply waltz up to you and demand to see your ID for no reason.
"With FRT technology, that's out the window," said Raphling. "You're being identified at all times — who you are, what you're doing, who you're associating with." His concern is not just FRT itself, but the broader apparatus of sophisticated law enforcement – predictive analytics and data crunching from the photos, as now "you can't go out in public life without being under this surveillance."
The tech has been accused of racial bias, as research suggests the algorithms powering facial recognition lead to a higher chance of false matches for minorities and women. In one cheeky experiment, the ACLU used Amazon's facial recognition software ("Rekognition," which is not the software used by the LAPD) to compare the headshots of Congress with a database of mugshots, and they found that a whopping 39% of the false matches came from representatives who were people of color, even though they constitute just 20% of Congress.
The technology employed by the LAPD ignores pigmentation, according to an officer who oversees it, instead digitally mapping the face by looking at things like the distance between the eyes, or the distance from the nose to mouth. (Image: Shutterstock)
Bita Amani, part of the Center for the Study of Racism, Social Justice, and Health, adds that constant surveillance likely poses an underappreciated health risk to marginalized communities, and that even if the facial recognition is flawless and accurate, it's just "strengthening and expanding the powers of the system that already targets the Black and the poor, and the people at the margins."
The police, of course, see all of this quite differently.
"This is not a sole identification tool. Ever," said Captain Christopher Zine of the LAPD. "This is basically a digital mug book." In the old days, you'd need to flip through stacks of photos and try to eyeball a match. It's slow. It's tedious. Now the system takes a photo and then queries it against the Los Angeles County Regional Identification System (LACRIS) database, which contains 7 million photos from 4 million people. (The LAPD clarified that the photos come from decades of arrests, and include non-L.A. residents.)
Lieutenant Derek Sabatini heads up the LACRIS system. He is well aware of the concerns over bias, but suggested that facial recognition technology, in a certain sense, can be employed to reduce the role of implicit bias. If humans do indeed harbor implicit biases, maybe tech can help inject objectivity?
In the traditional use of a photo, said Lt. Sabatini, "you might look at a male Hispanic and then filter that search" based on race or gender. But the FRT works differently. (The department prefers the term "PCT", for Photo Comparison Technology.) Sabatini said that the PCT employed by the LAPD ignores pigmentation, and instead digitally maps the face by looking at things like the distance between the eyes, or the distance from the nose to mouth.
Sabatini gives an example. One time the cops were trying to catch someone who was stealing packages off porches. They had a photo of a tattooed individual, and from a casual glance, it appeared to be a Hispanic man. When they zapped the photo through the database, the match turned out to be a Hispanic woman, whom they arrested and charged in court. Sabatini said the facial recognition technology "actually takes away any bias in the user and just kind of goes, 'here's what's best, based on what you're providing me.'"
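The landmark-distance approach Sabatini describes can be sketched simply: represent each face as ratios of distances between fixed points (eyes, nose, mouth), then find the database entry whose ratios are closest. All numbers and names below are invented, and production systems use learned embeddings rather than two hand-picked ratios, but the geometric idea is the same.

```python
# Sketch of landmark-distance face matching: encode each face as
# scale-invariant ratios of distances between facial landmarks, then
# return the nearest gallery entry. All measurements are invented.
import math

def face_signature(eye_dist, nose_to_mouth, face_width):
    # Normalize by face width so the signature doesn't depend on photo scale.
    return (eye_dist / face_width, nose_to_mouth / face_width)

def best_match(probe, gallery):
    """Return the gallery entry whose signature is nearest the probe's."""
    return min(gallery, key=lambda entry: math.dist(entry["sig"], probe))

gallery = [
    {"name": "booking-photo-1", "sig": face_signature(62, 31, 150)},
    {"name": "booking-photo-2", "sig": face_signature(70, 40, 160)},
]
probe = face_signature(63, 32, 152)   # a new photo, close to photo 1's geometry
print(best_match(probe, gallery)["name"])
```

Because the signature is built from geometry alone, pigmentation never enters the comparison, which is the point Sabatini is making; critics would counter that bias can still enter through image quality and the makeup of the database itself.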
Some of the tension — and apprehension — seems to be a conflation between what's possible and what is actually being done. The activists fear the worst ("look at the history of the criminal justice system," said Khan) and the cops insist they are following a reasonable protocol.
"One of the big misconceptions is surveillance," said Sabatini, who explained that live feeds (such as continuous footage from an elevator camera) are not being dumped into the LAPD's records and then later mined for algorithmic dark sorcery. "You can't just have live feeds going through a system," he said. "We don't have the capability of that, and it would be against the law."
The department is also forbidden from using third-party photo databases or tools like Clearview AI. Every photo needs to be legally obtained, and to help solve a crime.
Captain Zine said that since the January protocols were enacted, the department created additional processes to ensure that only their own LACRIS database is being used, that extensive training is in place, and that only a small subset of the LAPD even has access to the tool. As for any official numbers, or quantified results and updates? This is still TBD. Zine said the LAPD is still conducting an internal review of FRT's effectiveness, and declined to provide numbers before that's finished (which he expects will be in September).
Critics like Khan, Raphling and Amani think that this middle ground is not enough, and that the potential for abuse — and the troubling history of discrimination — is itself reason enough to ban the tech. Khan points to reports that the LAPD sought photos from Ring doorbell cameras during the Black Lives Matter protests, as well as a high-profile false arrest in Detroit, although he is not aware of any specific abuses of the system, or examples of discrimination or misuse since the January protocol went into effect. The concerns seem to be more about the lurking threat of the ever-more-powerful "Stalker State" technology, as opposed to the more narrow use of the "digital mug book."
Others remain deeply skeptical. "Their argument is 'just trust us,'" said Raphling, arguing that law enforcement has a history of saying "we use it in this very minimal way," but that "it turns out they were using it vastly more." He added, more bluntly, "we would be suckers to trust them again."
Sabatini said he understands the broader concerns around a creepy, "Black Mirror"-esque surveillance state. "That stuff scares us as much as it scares the public. I don't want that," he said with a laugh. "I think we're all on the same team, and people forget that."
Lead image by Ian Hurley.
Correction: An earlier version of this post mis-spelled Hamid Khan's name.