There are a lot of companies that offer online lessons for students, but Numerade is betting its combination of tutor-made video and AI-driven personalization can break through the noise.
Numerade's subscription service boasts a database of over 1.2 million videos covering middle school, high school and college-level STEM courses, all made by its certified tutors. Using AI, the company lets users search that database to generate customized lessons and quizzes.
The Los Angeles-based company announced a $26 million round of Series A funding this week led by IDG Capital and including General Catalyst, Mucker Capital and Kapor Capital.
More than 30,000 users are certified tutors on the platform to date, according to the company. Numerade charges users $9.99 a month or $83 annually.
Numerade isn't the only edtech company using AI to help the learning process. Companies like Quizlet have AI tools that generate step-by-step solutions for students. But unlike those platforms, Numerade uses video, which it sees as more digestible and interactive than text.
"By the end of the video, whether it's a minute long or six minutes long, the student will learn the process of how the educator got from point A to point Z," said Jonathan Gupta-Buckley, Numerade's vice president of growth. "So we don't give away the answer, we ensure that [students are] learning and growing while they're on our platform."
Investors have flocked to edtech during the pandemic, as forced school closures accelerated the adoption of digital technologies in the classroom.
Numerade will have to compete against existing video-based learning platforms such as Khan Academy, which already offers STEM courses from middle school through the college level.
But Numerade sees itself as a service that will open up the experience of private tutoring to a broader demographic, at a much cheaper price.
"The cost of private tutoring is absolutely exorbitant," Gupta-Buckley said. "By providing a service whereby a student can learn, grow and build their builder skill set within STEM at a fraction of the cost is an invaluable service."
Despite the return to in-person schooling, Numerade sees itself as a complement to a student's normal course load.
"This pandemic opened people up to the idea of hybrid learning," Gupta-Buckley said. "So I think we will be seen as a complementary platform to in-class education."
Venture capitalists last month sank nearly half a billion dollars into a Southern California defense technology startup whose surveillance towers track migrants along the U.S.-Mexico border.
Anduril Industries, the Irvine-based maker of autonomous drones, towers and small ground sensors, will use the $450 million for acquisitions and to build out its AI-powered tech designed for military and border enforcement agencies.
But activists and experts are raising flags about the technology, pointing to privacy violations and civil liberties infringements.
They also question the government's steep investment in the private defense contractors behind it.
"The fact that we're spending money on the border wall also means that we're not investing in the things we all actually need here in the valley," said Norma Herrera, an organizer with the Rio Grande Valley Equal Voice Network.
She pushes back against what President Biden called an "effective and modern border security" system—a bureaucratic apparatus that allocates $1.2 billion for border infrastructure next year (still a drop in the bucket, given the Department of Homeland Security's $52 billion 2022 budget).
Before the pandemic, Herrera knocked on doors in Texas' Starr County to tell residents about the amount of money elected officials were pouring into Trump's border wall. Now, she's learning how to explain the virtual wall, one that's often harder to notice.
Anduril declined to make executives at the company available for interviews.
Surveillance on the Border
Over the last decade, the border security and immigration detention industry has ballooned as Democrats and Republicans both funnel more government money into private companies. Between the fiscal years 2017 and 2020, Customs and Border Protection received about $743 million from Congress for tech and surveillance, according to the legal organization Just Futures Law. And in the 2021 fiscal year alone, the Department of Homeland Security received over $780 million for the same purpose.
Anduril's recent project with CBP revolves around a $250 million contract signed under the Trump administration in July of 2020 to set up 200 solar-powered watch towers along the southern border. Of the towers, 60 are up and running as of July 2.
Under Biden's leadership, funding for border technology has become an even bigger priority, said Dinesh McCoy, a legal fellow at Just Futures Law.
"It's in large part a response to coinciding pressures of distinguishing themselves from the Trump years," he said.
Many Democrats back Biden's vision, considering a virtual barrier a far better alternative to the physical border wall Republicans prefer.
"When it comes to proposals for a virtual wall, we're talking about heavy, heavy investments," said Saira Hussain, an attorney at the Electronic Frontier Foundation who specializes in racial and immigrant justice, surveillance and technology.
Government agencies are tapping a number of private companies to install the technology. In 2019, CBP awarded the Israeli defense contractor Elbit Systems $26 million to install surveillance towers along the border.
Then came the administration's 2020 deal with Anduril. Its AI-powered operating system, called Lattice, is designed to distinguish humans from animals along the border and send that information to an agent's cell phone. The company has to date raised $691 million in venture capital, including last month's $450 million round, whose backers included Andreessen Horowitz. Anduril is now valued at $4.6 billion.
"As with all of our investments, this is a bet not just on the technology (breathtaking) and the market (enormous) but also the people (outstanding)," Andreessen Horowitz co-founder and general partner Marc Andreessen said in a prepared statement.
Marc Andreessen is a longtime investor in Palmer Luckey, Anduril's 28-year-old founder. He backed Luckey's first company — virtual reality startup Oculus — before Facebook bought it for $2 billion in 2014. A few years later, Luckey left following reports that he was funding a far-right political group.
In 2017, Luckey opened Anduril with a band of former employees from Oculus VR and Palantir, the software giant with major contracts with several government agencies.
Along the border, Anduril's 33-foot towers are continuously scanning plots of land about three miles in diameter. They're built to ignore animals — what CBP calls a "false positive" — and light up after detecting movement from people or cars.
The towers watch for "illegal border crossings, human trafficking and drug smuggling," a spokesperson for Anduril said by email.
If a person or group falls out of the camera's vision, AI tells the next tower to pick it back up. Border patrol agents then receive an alert to their cell phones or computers.
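The handoff described here — one tower losing sight of a target, the next tower being cued to reacquire it, and an agent being alerted — can be illustrated with a toy sketch. Everything below (class names, the detection radius, the alert format) is hypothetical for illustration; it is not Anduril's actual Lattice API.

```python
# Toy sketch of a camera-to-camera "handoff": as a tracked target moves,
# the nearest tower that can still see it is cued and an alert is queued
# for a border agent. All names and numbers are illustrative only.
from dataclasses import dataclass


@dataclass
class Tower:
    name: str
    position: float      # position along the border, in miles
    radius: float = 1.5  # detection radius (~3-mile diameter coverage)
    cued: bool = False

    def sees(self, target_pos: float) -> bool:
        return abs(target_pos - self.position) <= self.radius


def hand_off(towers, target_pos, alerts):
    """Cue the closest tower that can see the target; alert the agent."""
    visible = [t for t in towers if t.sees(target_pos)]
    if not visible:
        return None  # target is out of every tower's view
    nearest = min(visible, key=lambda t: abs(t.position - target_pos))
    nearest.cued = True
    alerts.append(f"{nearest.name}: person detected near mile {target_pos}")
    return nearest


towers = [Tower("Tower-A", 0.0), Tower("Tower-B", 2.5)]
alerts = []
# A target walks out of Tower-A's coverage and into Tower-B's.
for pos in (0.5, 1.2, 2.0):
    hand_off(towers, pos, alerts)
```

In this sketch the first two positions fall to Tower-A and the last to Tower-B, mimicking the "next tower picks it back up" behavior the agents describe.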
The goal is to mimic an agent's pair of eyes, especially in remote and rural spots. As one agent put it, "they see what we can't see on the ground."
They also run on solar power, a feature CBP said avoids the need for new infrastructure that can "complicate the Border Patrol's agreements with many of the private ranchland owners, national parks, and Native Americans' tribal lands where the Border Patrol must work."
Video surveillance drones and towers are spreading into nearly every industry, from homeland security to fast-food delivery to monitoring traffic and parking violations along busy streets.
The tech is also raising a flood of questions from academics and legal groups like the Electronic Frontier Foundation and Just Futures Law, all of them worried about the implications of surveillance not only for migrants, but for U.S. residents. In May of 2020, for example, CBP flew a drone over Minneapolis to record protestors following the police murder of George Floyd.
"We know that what's often deployed at the border and what's normalized at the border in terms of surveillance eventually makes its way into the interior of the United States," said Hussain, the attorney from EFF.
The company says it does not use facial recognition or collect identifiable information.
But critics like the ACLU of Texas and other civil liberties groups said it's unclear what data is being collected by private defense contractors like Anduril and how it could be used and shared.
"The border is a testing ground for surveillance elsewhere," said McCoy, the legal fellow at Just Futures Law. "Unfortunately, it's been primarily used to surveil Black and brown folks in the U.S. and abroad."
As the U.S. begins reducing its military footprint in the Middle East, McCoy suspects other military contractors will turn to border surveillance as a new source of profit.
"These tools that were once confined to military contexts have found themselves more and more in local communities," he said.
Anduril, for its part, insists it is providing the government with a crucial security mechanism. "Anduril identifies a security problem," reads a prepared statement forwarded to dot.LA by a company spokesperson, "builds a potential solution, then takes it to the government for potential consideration."
Lead art by Ian Hurley
Editor's note: This article has been updated to clarify that Andreessen Horowitz was involved in Anduril's $450 million round, but was not the sole funder. Additionally, mentions of Anduril's $250 million contract with CBP have been updated to clarify that it was negotiated not with President Trump himself, but with members of his administration.
In Los Angeles, the cameras are everywhere. Cameras at traffic lights. Cameras on doorbells. Cameras on billions of smartphones. When your photo is snapped by these cameras, facial recognition technology can match your face to a database of millions of mug shots, potentially linking you to a crime.
Is this legal? Is this fair? Is this right?
These questions loom large over the technology, which the Los Angeles Police Department has been using since 2009. In November, an investigation by BuzzFeed News found that the LAPD had used the tech 30,000 times in the last decade, including using the controversial "Clearview AI," which trawls the internet for social media photos. Activists, furious over the investigation's findings, sought a ban on the tech. In January, the LAPD adopted what's effectively a "compromise" policy that prohibited the use of Clearview AI and other third-party photo databases, but allowed officers to use Facial Recognition Technology (FRT) with the department's own in-house database of mug shots.
Flash forward six months. After road-testing the system, the LAPD said it's an effective tool that's being used with restraint, rapidly speeding up the time it takes to scroll through mug shots and helping to catch crooks. Activists say it should be forbidden, and that it disproportionately impacts communities of color.
"You have to look at the broader context, and where it fits in the broader 'stalker state,'" said Hamid Khan, founder of the Stop LAPD Spying Coalition. "This is not a moment in time, but a continuation of history."
The roots of the "stalker state," according to Khan, go back to the Lantern Laws of the 18th century, when Black people were required to carry lanterns after dark. Since then, we've seen a number of policies that have disproportionately targeted Black and Latino people, ranging from New York City's "stop and frisk" to the Department of Homeland Security's more recent "Suspicious Activity Reporting" program (a partnership between federal and local law enforcement), which allows anyone to report perceived sketchy behavior to the authorities. One audit found that Black people were reported in 21% of these "suspicious activities," even though they represent only 8% of Los Angeles County's population.
Activists worry FRT takes a pattern of discrimination and merges it with the brutal efficacy of surveillance tech.
"The danger now is that you're going to subject certain neighborhoods, certain people, and certain religious groups to this constant ever-present surveillance," said John Raphling, a senior researcher on criminal justice for Human Rights Watch. Raphling said that the Fourth Amendment, as established in the 1979 Supreme Court case Brown v. Texas, means that the police can't simply waltz up to you and demand to see your ID for no reason.
"With FRT technology, that's out the window," said Raphling. "You're being identified at all times — who you are, what you're doing, who you're associating with." His concern is not just FRT itself, but the broader apparatus of sophisticated law enforcement – predictive analytics and data crunching from the photos, as now "you can't go out in public life without being under this surveillance."
The tech has been accused of racial bias, as research suggests the algorithms powering facial recognition lead to a higher chance of false matches for minorities and women. In one cheeky experiment, the ACLU used Amazon's facial recognition software ("Rekognition," which is not the software used by the LAPD) to compare the headshots of Congress with a database of mugshots, and they found that a whopping 39% of the false matches came from representatives who were people of color, even though they constitute just 20% of Congress.
Bita Amani, part of the Center for the Study of Racism, Social Justice, and Health, adds that constant surveillance likely poses an underappreciated health risk to marginalized communities, and that even if the facial recognition is flawless and accurate, it's just "strengthening and expanding the powers of the system that already targets the Black and the poor, and the people at the margins."
The police, of course, see all of this quite differently.
"This is not a sole identification tool. Ever," said Captain Christopher Zine of the LAPD. "This is basically a digital mug book." In the old days, you'd need to flip through stacks of photos and try to eyeball a match. It's slow. It's tedious. Now the system takes a photo and queries it against the Los Angeles County Regional Identification System (LACRIS) database, which contains 7 million photos from 4 million people. (The LAPD clarified that the photos come from decades of arrests, and include non-L.A. residents.)
Lieutenant Derek Sabatini heads up the LACRIS system. He is well aware of the concerns over bias, but suggested that facial recognition technology, in a certain sense, can be employed to reduce the role of implicit bias. If humans do indeed harbor implicit biases, maybe tech can help inject objectivity?
In the traditional use of a photo, said Lt. Sabatini, "you might look at a male Hispanic and then filter that search" based on race or gender. But the FRT works differently. (The department prefers the term "PCT", for Photo Comparison Technology.) Sabatini said that the PCT employed by the LAPD ignores pigmentation, and instead digitally maps the face by looking at things like the distance between the eyes, or the distance from the nose to mouth.
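The geometry-based comparison Sabatini describes — matching on relative distances between facial landmarks rather than on skin tone — can be illustrated with a toy sketch. The landmark names, ratios and tolerance below are hypothetical, chosen only to show the idea; they are not LACRIS's actual algorithm.

```python
# Toy sketch of geometry-based face comparison: each face is reduced to
# ratios of distances between landmarks (eye spacing, nose-to-mouth), so
# pigmentation never enters the comparison and the ratios are unaffected
# by photo scale. Illustrative only.
import math


def signature(landmarks):
    """Ratios of inter-landmark distances; scale-invariant by design."""
    def dist(a, b):
        return math.dist(landmarks[a], landmarks[b])
    eye_span = dist("left_eye", "right_eye")
    return (dist("nose", "mouth") / eye_span,
            dist("left_eye", "nose") / eye_span)


def is_match(face_a, face_b, tol=0.05):
    """Faces match if every distance ratio agrees within a tolerance."""
    sig_a, sig_b = signature(face_a), signature(face_b)
    return all(abs(x - y) < tol for x, y in zip(sig_a, sig_b))


probe = {"left_eye": (30, 40), "right_eye": (70, 40),
         "nose": (50, 60), "mouth": (50, 80)}
# The same geometry photographed at twice the scale still matches,
# because only ratios of distances are compared.
mugshot = {k: (2 * x, 2 * y) for k, (x, y) in probe.items()}
```

Because only ratios enter the comparison, a larger or smaller photo of the same face produces the identical signature, while a face with different proportions fails the tolerance check.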
Sabatini gives an example. One time the cops were trying to catch someone who was stealing packages off porches. They had a photo of a tattooed individual, and from a casual glance, it appeared to be a Hispanic man. When they ran the photo through the database, the match came back as a Hispanic woman, whom they arrested and charged in court. Sabatini said the facial recognition technology "actually takes away any bias in the user and just kind of goes, 'here's what's best, based on what you're providing me.'"
Some of the tension — and apprehension — seems to be a conflation between what's possible and what is actually being done. The activists fear the worst ("look at the history of the criminal justice system," said Khan) and the cops insist they are following a reasonable protocol.
"One of the big misconceptions is surveillance," said Sabatini, who explained that live feeds (such as continuous footage from an elevator camera) are not being dumped into the LAPD's records and later mined for algorithmic dark sorcery. "You can't just have live feeds going through a system," he said. "We don't have the capability of that, and it would be against the law."
The department is also forbidden from using third-party photo databases or tools like Clearview AI. Every photo needs to be legally obtained and used to help solve a crime.
Captain Zine said that since the January protocols were enacted, the department created additional processes to ensure that only their own LACRIS database is being used, that extensive training is in place, and that only a small subset of the LAPD even has access to the tool. As for any official numbers, or quantified results and updates? This is still TBD. Zine said the LAPD is still conducting an internal review of FRT's effectiveness, and declined to provide numbers before that's finished (which he expects will be in September).
Critics like Khan, Raphling and Amani think that this middle ground is not enough, and that the potential for abuse — and the troubling history of discrimination — is itself reason enough to ban the tech. Khan points to reports that the LAPD sought photos from Ring doorbell cameras during the Black Lives Matter protests, as well as a high-profile false arrest in Detroit, although he is not aware of any specific abuses of the system, or examples of discrimination or misuse since the January protocol went into effect. The concerns seem to be more about the lurking threat of the ever-more-powerful "Stalker State" technology, as opposed to the more narrow use of the "digital mug book."
Others remain deeply skeptical. "Their argument is 'just trust us,'" said Raphling, arguing that law enforcement has a history of saying "we use it in this very minimal way," but that "it turns out they were using it vastly more." He added, more bluntly, "we would be suckers to trust them again."
Sabatini said he understands the broader concerns around a creepy, "Black Mirror"-esque surveillance state. "That stuff scares us as much as it scares the public. I don't want that," he said with a laugh. "I think we're all on the same team, and people forget that."
Lead image by Ian Hurley.
Correction: An earlier version of this post mis-spelled Hamid Khan's name.