The LAPD Spends Millions on Spy Tech. Here’s What They’re Buying

Amrita Khalid
Amrita Khalid is a tech journalist based in Los Angeles, and has written for Quartz, The Daily Dot, Engadget, Inc. Magazine and a number of other publications. She got her start in Washington, D.C., covering Congress for CQ-Roll Call. You can send tips or pitches to amrita@dot.la or reach out to her on Twitter at @askhalid.
[Image: a police officer surrounded by drones and security cameras. Credit: Andria Moore]

Over the past six years, the LAPD spent millions in FEMA funds on automated license plate readers, predictive policing software and other spy tech, according to a new report from the Action Center on Race and Economy (ACRE). The report focuses on a FEMA counter-terrorism grant program known as the Urban Area Security Initiative (UASI). Created in 2003, UASI was designed to help the nation's largest cities beef up their emergency preparedness agencies and prevent acts of domestic terrorism.


According to a mayor's report from January 2021, the city of Los Angeles received roughly $20.5 million in UASI grants. Approximately half of that amount, or $10 million, went to the LAPD. Notably, that sum was only a drop in the bucket of the LAPD's total $1.7 billion budget for fiscal year 2020 (a budget that was cut by $150 million in response to the movement to defund the police). For fiscal year 2022-2023, the L.A. City Council approved a $1.9 billion operating budget for the city's police.

While local police departments receiving federal money is nothing new, critics say the existence of such funds gives the LAPD more freedom to invest in invasive technologies. The LAPD recently came under scrutiny for its use of facial recognition: earlier this month, an inspector general's report revealed that the department's facial recognition software returned a positive match only about 55% of the time, and that the department didn't track incidents in which a match led to the misidentification of a suspect.

Automatic License Plate Readers (ALPR)

Between 2016 and 2020, the LAPD purchased at least $1.27 million worth of ALPRs, according to the report. These high-speed cameras scan nearby license plates and alert police officers to stolen vehicles or people wanted for a crime, and they have come under increasing scrutiny for their high error rates and the privacy risks they pose.
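Mechanically, the alerting step is simple: once a camera has read a plate, the system checks it against a "hotlist" of wanted plates. Below is a minimal sketch of that lookup in Python; the plate numbers, alert reasons and normalization rules are all invented for illustration, not drawn from any vendor's system.

```python
# Hypothetical hotlist mapping plate numbers to alert reasons.
HOTLIST = {
    "7ABC123": "reported stolen",
    "4XYZ789": "registered owner wanted",
}

def check_plate(plate: str):
    """Return an alert reason if the scanned plate is on the hotlist."""
    normalized = plate.upper().replace(" ", "")
    return HOTLIST.get(normalized)

for scanned in ["7abc 123", "6DEF456"]:
    reason = check_plate(scanned)
    if reason:
        print(f"ALERT: {scanned} -> {reason}")
```

A real deployment layers optical character recognition, timestamps and location logging on top of this lookup, which is how databases of driver movements accumulate.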

One 2020 audit found that the LAPD and three other police departments were collecting massive amounts of data on drivers and their movements — but weren’t doing enough to protect privacy. According to the audit, of the 320 million images that the LAPD had stored in 2020, roughly 99.9% were unrelated to criminal investigations.

Palantir Data Fusion Platforms

Palantir, a controversial software company that has faced criticism for enabling government surveillance, has provided predictive policing software to the LAPD since 2011. The company’s platform can identify criminal “hotspots” by analyzing license plate photos, police reports, gang databases, regional crime reports and other data.
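Palantir doesn't disclose its methods, but the basic idea of a "hotspot" can be illustrated with a toy example: bin geocoded incidents into grid cells and rank the cells by count. The coordinates and grid size below are invented, and this is a deliberate oversimplification of what any data fusion platform actually does.

```python
from collections import Counter

# Invented incident coordinates (latitude, longitude).
incidents = [
    (34.051, -118.251),
    (34.052, -118.249),
    (34.050, -118.250),
    (33.992, -118.310),
]

def cell(lat: float, lon: float, size: float = 0.01):
    """Snap a coordinate to a grid cell roughly 1 km on a side."""
    return (round(lat / size), round(lon / size))

counts = Counter(cell(lat, lon) for lat, lon in incidents)
print(counts.most_common(1))  # the cell with the most recorded incidents
```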

The exact amount of money the company has received through UASI is unclear, but estimates based on public records requests suggest the LAPD spent more than $20 million on Palantir software between 2009 and 2018.

It’s important to note that such tools have disproportionately targeted low-income individuals, people of color and unhoused people. Last year, more than 1,400 mathematicians signed a letter, published in the trade journal Notices of the American Mathematical Society (AMS), criticizing predictive policing for its racial biases. In 2019, PredPol (a predictive policing tool once used by the LAPD) faced criticism from mathematicians for using flawed algorithms that created feedback loops.

“If you build predictive policing, you are essentially sending police to certain neighborhoods based on what they told you, but that also means you’re not sending police to other neighborhoods because the system didn’t tell you to go there,” University of Utah computing professor Suresh Venkatasubramanian told Motherboard.
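That feedback loop is easy to demonstrate in a few lines of code. In the toy simulation below, two neighborhoods have identical underlying crime, but whichever one starts with more recorded arrests keeps drawing more patrols, and therefore more recorded arrests. The starting counts and patrol effects are invented; this sketches the statistical trap, not any vendor's actual model.

```python
import random

random.seed(0)
arrests = [10, 5]  # same true crime rate; neighborhood B just has fewer records

for day in range(30):
    # Patrols go wherever past data shows more arrests...
    hot = 0 if arrests[0] >= arrests[1] else 1
    # ...and more patrols produce more recorded arrests there.
    arrests[hot] += random.randint(1, 3)
    arrests[1 - hot] += random.randint(0, 1)

print(arrests)  # the initial gap widens even though the true rates were equal
```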

Radio Systems

Between 2016 and 2020, the LAPD spent roughly $24 million to upgrade its radio communications network through Motorola. As the Los Angeles Times reported in 2007, the department's two-way portable radio system was often unreliable and had needed an upgrade for years; some officers even resorted to using their cell phones for field communications.

Critics note that encrypted radio systems allow police to avoid public oversight, and many cities have encrypted their scanners in recent years. Since 2020, cities including Santa Monica, Santa Cruz and San Diego (though not Los Angeles) have opted to take their radio communications private in order to comply with a DOJ directive to protect personal information. Critics warn that such a move prevents the media and the public from keeping track of criminal activity or public safety developments during natural disasters.

Cell Site Simulators

The LAPD also spent $630,000 of its 2020 UASI funding on cell site simulators, devices that mimic cell towers and allow police to pinpoint the location of a specific smartphone. Cell site simulators can identify the unique IMSI (international mobile subscriber identity) number attached to every mobile device.

Also known as Stingrays or IMSI catchers, the devices trick nearby mobile devices into connecting with them and then collect the data those devices send, including location and other personal information. Some models can even intercept the content of text messages, voice calls and web browsing sessions.
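The IMSI itself is what makes this tracking possible: it's a 15-digit identifier with a standardized layout (a three-digit mobile country code, a two- or three-digit mobile network code, and a subscriber number). The sketch below decodes a captured IMSI into those fields; the example value is fabricated.

```python
def parse_imsi(imsi: str, mnc_digits: int = 3) -> dict:
    """Split a 15-digit IMSI into its standardized fields.

    US networks use 3-digit mobile network codes; some countries use 2.
    """
    assert len(imsi) == 15 and imsi.isdigit(), "IMSI must be 15 digits"
    return {
        "mcc": imsi[:3],                # mobile country code (e.g. 310 = US)
        "mnc": imsi[3:3 + mnc_digits],  # mobile network code (the carrier)
        "msin": imsi[3 + mnc_digits:],  # subscriber identification number
    }

print(parse_imsi("310260123456789"))  # fabricated example value
```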

Cell site simulators are in wide use in California and in major police departments throughout the country, including in cities like Chicago, Boston and New York. Critics say the devices function as dragnet surveillance tools, capturing data from bystanders, and can potentially interfere with 911 calls.

Social Media Surveillance

Skopenow — a social media monitoring company — counts the LAPD as one of its customers, along with the U.S. Secret Service, the U.S. Postal Inspection Service and Broward County, Florida. Last year, the software led to the arrests of three middle school students in Florida after police found threats they made on social media.

According to the company’s website, Skopenow functions as a sort of “analytical search engine” for social media. It claims it can inform customers when criminals post content related to drugs, weapons or stolen items. It also lets users easily view a person of interest’s mutual friends, shared vehicles, employment histories and business affiliations.
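Skopenow doesn't publish how its flagging works, but the behavior described (alerting on posts that mention certain categories of content) can be approximated with a simple keyword scan. Everything in the sketch below, including the watchlist terms and the sample post, is hypothetical and is not the company's actual method.

```python
# Hypothetical watchlist; a real system would be far more sophisticated.
FLAGGED_TERMS = {"for sale", "burner", "untraceable"}

def flag_post(text: str) -> list:
    """Return any watchlist terms found in a post."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS if term in lowered]

print(flag_post("Burner phones for sale, totally untraceable"))
```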

Since the ACRE report's analysis doesn't extend past fiscal year 2020, it doesn't capture recent developments in the LAPD's use of surveillance. In August, the L.A. Police Commission adopted rules that require the LAPD to submit detailed proposals before acquiring new technology. The department must also disclose what data will be collected on people and how long it will be kept.

While such reform is a start, critics point out that similar policies on facial recognition haven't reined in police abuse and have instead served as cover.


AI Is Rapidly Advancing, but the Question Is, Can We Keep Up?

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.

One way to measure just how white-hot AI development has become: the world is running out of the advanced graphics chips necessary to power AI programs. Intel's central processing units were once the industry's most sought-after chips, but advanced graphics processors like Nvidia's are designed to run many computations simultaneously, a baseline requirement for most AI models.
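That parallelism matters because modern AI models spend most of their time on large matrix multiplications, which parallel hardware executes in bulk. The CPU-side comparison below (NumPy's bulk operation versus an element-by-element loop) is a rough analogy for the gap; GPUs push the same idea much further.

```python
import time
import numpy as np

n = 300
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c_bulk = a @ b  # one bulk operation, friendly to parallel hardware
bulk = time.perf_counter() - start

start = time.perf_counter()
c_loop = np.zeros((n, n))
for i in range(n):       # the same math, one output element at a time
    for j in range(n):
        c_loop[i, j] = a[i, :] @ b[:, j]
loop = time.perf_counter() - start

print(f"bulk: {bulk:.4f}s, looped: {loop:.4f}s")  # bulk wins by orders of magnitude
```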

An early version of ChatGPT required around 10,000 graphics chips to run. By some estimates, newer updates require 3-5 times that amount of processing power. As a result of this skyrocketing demand, shares of Nvidia have jumped 165% so far this year.

Building on this momentum, Nvidia this week revealed a lineup of new AI-related projects, including an Israeli supercomputer and a platform that uses AI to help video game developers. For smaller companies and startups, however, access to the vital underlying technology that powers AI development is already becoming less about meritocracy and more about who you know. According to the Wall Street Journal, Elon Musk scooped up a valuable share of server space from Oracle this year for his new OpenAI rival, X.AI, before anyone else got a crack at it.

The massive demand for Nvidia-style chips has also created a lucrative secondary market, where smaller companies and startups are often outbid by larger and more established rivals. One startup founder compares the fevered crush of the current chip marketplace to the run on toilet paper in the early days of the pandemic. For companies that can't get access to the most powerful chips or enough server space in the cloud, often the only remaining option is to simplify their AI models so they can run more efficiently.

Beyond just the design of new AI products, we're also at a key moment for users and consumers, who are still figuring out which applications are ideal for AI and which are less effective, or potentially even unethical or dangerous. There's now mounting evidence that the hype around some of these AI tools is reaching a lot further than the warnings about their drawbacks.

JPMorgan Chase is training a new AI chatbot, known as IndexGPT, to help customers choose financial securities and stocks. For now, the bank insists the tool is purely supplemental, designed to advise rather than replace money managers, but it may just be a matter of time before job losses begin to hit financial planners along with everyone else.

Just this week, a lawyer in New York was busted by a judge for using ChatGPT as part of his background research. When questioned by the judge, lawyer Peter LoDuca revealed that he'd farmed out some research to a colleague, Steven A. Schwartz, who had consulted ChatGPT on the case. Schwartz was apparently unaware that the AI chatbot was capable of fabricating information; transcripts even show him questioning ChatGPT's responses and the bot assuring him that the cases and citations it supplied were real.

New research by Maurice Jakesch, a doctoral student at Cornell University, suggests that even users who are more aware than Schwartz of how AI works and where it fails may still be influenced in subtle, subconscious ways by its output.

Not to mention, according to data from Intelligent.com, high school and college students already – on the whole – prefer utilizing ChatGPT for help with schoolwork over a human tutor. The survey also notes that advanced students tend to report getting more out of using ChatGPT-type programs than beginners, likely because they have more baseline knowledge and can construct better and more informative prompts.

But therein lies the big drawback to using ChatGPT and other AI tools for education. At least so far, they’re reliant on the end user writing good prompts and having some sense about how to organize a lesson plan for themselves. Human tutors, on the other hand, have a lot of personal experience in these kinds of areas. Someone who instructs others in foreign languages professionally probably has a good inherent sense of when you need to focus on expanding your vocabulary vs. drilling certain kinds of verb and tense conjugations. They’ve helped many other students prepare for tests, quizzes, and real-world challenges, while computer software can only guess at what kinds of scenarios its proteges will face.

A recent Forbes editorial by academic Thomas Davenport suggests that, while AI is getting all the hype right now, other forms of computing or machine learning are still going to be more effective for a lot of basic tasks. From a marketing perspective in 2023, it’s helpful for a tech company to throw the “AI” brand around, but it’s not magically going to be the answer for every problem.

Davenport points to a similar (if smaller) whirlwind of excitement around IBM's Watson in the early 2010s, when it famously took out human “Jeopardy!” champions. It turns out Watson was a general knowledge engine, really best suited for jobs like playing “Jeopardy!” But after the software gained celebrity status, people tried to use it for all sorts of advanced applications, like designing cancer drugs or providing investment advice. Today, few people turn to Watson for those kinds of solutions; it's just the wrong tool for the job. In the same way, Davenport suggests, generative AI is in danger of being misapplied.

While the industry and end users both race to solve the AI puzzle in real time, governments are also feeling pressure to step in and potentially regulate the AI industry. This is much easier said than done, though, as politicians face the same kinds of questions and uncertainty as everyone else.

OpenAI CEO Sam Altman has been calling for governments to begin regulating AI, but just this week, he suggested that the company might pull out of the European Union entirely if the regulations were too onerous. Specifically, Altman worries that attempts to narrow what kinds of data can be used to train AI systems – specifically blocking copyrighted material – might well prove impossible. “If we can comply, we will, and if we can’t, we’ll cease operating,” Altman told Time. “We will try, but there are technical limits to what’s possible.” (Altman has already started walking this threat back, suggesting he has no immediate plans to exit the EU.)

In the US, the White House has been working on a “Blueprint for an AI Bill of Rights,” but it's non-binding, just a collection of largely vague suggestions. It's one thing to agree that “consumers shouldn't face discrimination from an algorithm” and that “everyone should be protected from abusive data practices and have agency over how their data is used.” Enforcement is an entirely different animal. Many of these issues already exist in tech, are much larger than AI, and the US government already does little about them.

Additionally, it's possible AI regulations won't work well at all unless they're global. Even if you set policies that an entire nation's government agrees on, how do you establish similar protocols worldwide? What if the US and Europe agree but India doesn't? Everyone around the world accesses roughly the same internet, so without some kind of international standard, it's going to be much harder for individual nations to enforce specific rules. As with so many other AI developments, there's inherent danger in patchwork regulation: it could allow some companies, regions or players to move forward while others are unfairly or ineffectively held back.

The same kinds of socio-economic concerns around AI that we have nationally (some sectors of the workforce left behind, the wealthiest and most established players coming into the new market with massive advantages, the rapid spread of misinformation) are, in actuality, global concerns. Just as the hegemony of Microsoft and Google threatens the ability of new players to enter the AI space, the West's early dominance of AI tech threatens to push out companies and innovations from emerging markets in Southeast Asia, sub-Saharan Africa and Central America. Left unfettered, AI could potentially deepen social, economic and digital divisions both within and between all of these societies.

Undaunted, some governments aren't waiting around for these tools to develop any further before they start attempting to regulate them. New York City has already set up rules, which will take effect in July, about how AI can be used during the hiring process. The law requires any company using AI software in hiring to notify candidates that it's being used, and to have independent auditors check the system annually for bias.
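Those bias audits center on comparing selection rates across demographic groups. The sketch below shows one common way such a check is computed, an impact ratio against the highest-scoring group, with all counts invented; it's a simplified illustration, not the exact audit procedure the NYC law prescribes.

```python
# Invented counts: (candidates selected, candidates screened) per group.
selections = {"group_a": (120, 400), "group_b": (60, 300)}

rates = {g: sel / total for g, (sel, total) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = " <- review" if ratio < 0.8 else ""  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```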

This sort of piecemeal, figure-it-out-as-we-go approach is probably what's going to be necessary, at least in the short term, as AI development shows zero signs of slowing down or stopping any time soon. Though there's some disagreement among experts, most analysts agree with Wharton professor and economist Jeremy Siegel, who told CNBC this week that AI is not yet a bubble, pointing to Nvidia's earnings as a sign the market remains healthy and not overly frothy. So, at least for now, the feverish excitement around AI is not going to burst like a late-'90s startup stock. The world needs to prepare as if this technology is going to be with us for a while.

Rivian CEO Teases R2, New Features in Instagram AMA

David Shultz

David Shultz reports on clean technology and electric vehicles, among other industries, for dot.LA. His writing has appeared in The Atlantic, Outside, Nautilus and many other publications.


Rivian CEO RJ Scaringe took to Instagram last weekend to answer questions from the public about his company and its future, covering new colors, sustainability, the production ramp, and new products and features. Viewers also got a first look at the company's much-anticipated R2 platform, albeit rendered in clay and covered by a sheet, but hey, that's…something. If you don't want to watch the whole 33-minute video, which is now also on YouTube, we've got the highlights for you.


From AI to Layoffs, Here's Why College Grads No Longer Want Tech Jobs

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.

A new report in Bloomberg suggests that younger workers and college graduates are moving away from tech as the preferred industry in which to embark on their careers. Big tech companies and startups once promised skilled young workers not just the opportunity to develop cutting-edge, exciting products, but also perks and, for the most talented and ambitious newcomers, a relatively reliable path to wealth. (Who could forget the tales of overnight Facebook millionaires that fueled the previous dot-com explosion? There were even movies about it!)
