‘Nobody Runs a Criminal Record Check on a Company.’ Reboot LA Teaches Former Prisoners to Run Their Own Businesses

Breanna De Vera

Breanna de Vera is dot.LA's editorial intern. She is currently a senior at the University of Southern California, studying journalism and English literature. She previously reported for the campus publications The Daily Trojan and Annenberg Media.

Photo by Brooke Cagle on Unsplash

It can be nearly impossible for former convicts to find a job in L.A. A new incubator is training formerly incarcerated Angelenos to start their own businesses instead.

"Nobody runs a criminal record check on a company," said Reboot LA program director Claudia Diaz.

Reboot LA will offer 28 formerly incarcerated individuals a chance to participate in its incubator program, offered in partnership with the city of Los Angeles this fall. Its curriculum comes from Sabio Enterprises, a coding education company that runs boot camps for future software engineers.

"They're taking control [by] being hired on their own digital portfolios and their own talent," she said.

Because of the stigma, many people who have done time in prison or jail face higher hurdles to getting a job. Owning a company, instead of working for someone else or consulting as an individual, is often an easier path toward economic stability. And studies show that steady employment is associated with lower recidivism.

Reboot LA helps students build skills to be competitive, including how to source clients, create a digital portfolio, perform full stack development and, ultimately, own their own tech consulting company.

Homeboy Bakery & Homegirl Café

Reboot LA's Roots

Sabio co-founder and chief executive officer Liliana Monge came up with the idea while working with the Anti-Recidivism Coalition (ARC) in L.A. She thought Sabio's curriculum could help people with criminal records gain skills to work in tech and started devising a program to steer them toward employment after they finished a boot camp.

She hosted biweekly coding information sessions at Homeboy Industries and the Anti-Recidivism Coalition last year to gauge interest and engage possible participants.

But when Los Angeles went into lockdown, those classes went online and recruitment got harder. Then the city of Los Angeles agreed to cover the costs for 28 enrollees, and applications started to roll in. "In October, we finally got our first 100% remote program participant," she said.

Monge didn't want to disclose the names of participants because they attend classes alongside other Sabio students; she doesn't want them to have to deal with the stigmas around incarceration. But she did say the first participant is a Latina woman.

"[There is] a lack of women talent in the tech industry," Monge said. "So we're excited that our first program participant is a woman of color. And we look forward to having more program participants that are super diverse, and we want gender parity as well."

The city of Los Angeles was already working with ARC to provide job training to formerly incarcerated individuals. The Los Angeles Economic and Workforce Development Department (EWDD) noticed the tech workshops Sabio was doing with ARC. When Monge decided to expand the pilot program, EWDD worked closely with her to make Reboot LA available to all Angelenos with a record.

"The tech industry is thriving in Los Angeles, yet for some Angelenos, finding a job in this realm feels completely out of reach," said Carolyn Hull, general manager of EWDD. "EWDD invests in incubators as part of the city's mission to cultivate the city's clean tech industry and create opportunities for the city's underserved populations to gain access to the tech industry."

Few Legal Protections for Those with a Criminal Record

Angelenos with a criminal record are not legally protected against hiring discrimination based on their record. People with incarceration histories are four to six times more likely to be unemployed than peers without a record, according to data from the Prison Policy Initiative.

There have been a few recent measures in California that aim to provide them with protections against discrimination. But for the most part, these efforts haven't increased opportunities for formerly incarcerated people in the tech industry.

Last month, Governor Gavin Newsom signed an Assembly bill that expunges the criminal records of former prisoners who fought California wildfires. Not all prisoners are on the front lines of fighting fires, however. And the measure is only intended to help formerly incarcerated people seeking employment in emergency response.

In 2018, California's Fair Chance Act took effect. Known as "Ban the Box," the name refers to the box on job applications that indicates whether the applicant has a criminal record. California employers cannot ask applicants about their conviction histories. But that doesn't protect employees from a criminal history check after they are hired, according to the California Department of Fair Employment and Housing. And any job that already requires a background check, such as those in finance or government, is not subject to this law.

Emiliano Lopez and Guillermo "Memo" Armenta founded a web app and development company called Future Work. Courtesy of Reboot LA

Future Work: Rate My Parole Agent

Emiliano Lopez and Guillermo "Memo" Armenta — two Sabio graduates — helped Monge develop the ideas for Reboot LA. Both are social justice advocates whose work ranges from community outreach to housing people coming out of incarceration.

"Both my mom and I come from a marginalized community, we [are] both formerly incarcerated folks," said Lopez. "We took advantage of the opportunity that ARC and Sabio had at one time where we were able to join the coding boot camp on a scholarship."

Since graduating, they have founded a web app and development company called Future Work.

Lopez and Armenta were introduced to Sabio's programming at the Anti-Recidivism Coalition. They took a 12-week coding bootcamp, and after finishing, started looking for work in the tech space.

"From there, Memo and I decided to look for jobs in the coding world. And we noticed that it was largely closed for people with a background," said Lopez.

They brainstormed and worked on small projects for a while, which Lopez saved in his Google Drive within a folder called "future work."

"We were just fed up with the way things were going. And we just threw our hands up in the air and we went downtown. We filed to create a company called Future Work, named after the folder on my Google Drive," said Lopez. "We're a functioning part-time business right now. And currently, we have a little product to offer."

That product is an app aimed at improving relationships between parole agents and parolees, designed for people with backgrounds similar to their own.

"[It's] going to be a "rating app" for parole agents, to understand what the relationship between parole agent and parolee is," said Lopez. "What that looks like on the grand scale is, 'What does that culture look like, with an entire office of parole agents and an entire community of people on parole?' [We'll] use that data to improve those relationships in the future, so we can build a safer society — one that is based on mutual respect, and the common goal of having someone succeed and not go back into the institution."

Reboot LA is still looking for participants for its first official cohort. Applications are available on its website. Los Angeles residents can apply to the free program, and cohorts are selected every month.

Full-time courses run for 13 weeks, six days a week. Part-time courses meet on weekday evenings and Saturdays. Participants are trained in Microsoft's .NET platform, Node.js development, client-side frameworks, database architecture and API tools.

"L.A. is really kind of brimming with exceptional tech talent," Monge said. "And so we're excited to make sure that through this program, we can bring in diverse voices to the tech ecosystem."


Creandum’s Carl Fritjofsson on the Differences Between the Startup Ecosystem in Europe and the U.S.

Decerry Donato

Decerry Donato is a reporter at dot.LA. Prior to that, she was an editorial fellow at the company. Decerry received her bachelor's degree in literary journalism from the University of California, Irvine. She continues to write stories to inform the community about issues or events that take place in the L.A. area. On the weekends, she can be found hiking in the Angeles National Forest or sifting through racks at your local thrift store.

Carl Fritjofsson

On this episode of the LA Venture podcast, Creandum General Partner Carl Fritjofsson talks about his venture journey, why generative AI represents an opportunity to rethink products from the ground up, and why Q4 2023 and Q1 2024 could be "pretty bloody" for startups.


AI Is Undergoing Some Growing Pains at a Pivotal Moment in Its Development

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.
Evan Xie

One way to measure just how white-hot AI development has become: the world is running out of the advanced graphics chips necessary to power AI programs. Intel's central processing units were once the industry's most sought-after processors, but advanced graphics chips like Nvidia's are designed to run many computations simultaneously, a baseline necessity for many AI models.

An early version of ChatGPT required around 10,000 graphics chips to run. By some estimates, newer updates require 3-5 times that amount of processing power. As a result of this skyrocketing demand, shares of Nvidia have jumped 165% so far this year.

Building on this momentum, Nvidia this week revealed a lineup of new AI-related projects, including an Israeli supercomputer and a platform that uses AI to help video game developers. For smaller companies and startups, however, getting access to the vital underlying technology that powers AI development is already becoming less about meritocracy and more about "who you know." According to the Wall Street Journal, Elon Musk scooped up a valuable share of server space from Oracle this year for his new OpenAI rival, X.AI, before anyone else got a crack at it.

The massive demand for Nvidia-style chips has also created a lucrative secondary market, where smaller companies and startups are often outbid by larger and more established rivals. One startup founder compared the fevered crush of the current chip marketplace to the scramble for toilet paper in the early days of the pandemic. For companies that don't get access to the most powerful chips or enough server space in the cloud, often the only remaining option is to simplify their AI models so they can run more efficiently.

Beyond just the design of new AI products, we’re also at a key moment for users and consumers, who are still figuring out what sorts of applications are ideal for AI and which ones are less effective, or potentially even unethical or dangerous. There’s now mounting evidence that the hype around some of these AI tools is reaching a lot further than the warnings about its drawbacks.

JP Morgan Chase is training a new AI chatbot, known as IndexGPT, to help customers choose financial securities and stocks. For now, the bank insists it's purely supplemental, designed to advise rather than replace money managers, but it may just be a matter of time before job losses begin to hit financial planners along with everyone else.

A lawyer in New York was busted by a judge just this week for using ChatGPT as part of his background research. When questioned by the judge, lawyer Peter LoDuca revealed that he'd farmed out some research to a colleague, Steven A. Schwartz, who had consulted with ChatGPT on the case. Schwartz was apparently unaware that the AI chatbot was able to lie – transcripts even show him questioning ChatGPT's responses and the bot assuring him that these were, in fact, real cases and citations.

New research by Maurice Jakesch, a doctoral student from Cornell University, suggests that even users who are more aware than Schwartz about how AI works and its limitations may still be impacted in subtle and subconscious ways by its output.

Not to mention, according to data from Intelligent.com, high school and college students already – on the whole – prefer utilizing ChatGPT for help with schoolwork over a human tutor. The survey also notes that advanced students tend to report getting more out of using ChatGPT-type programs than beginners, likely because they have more baseline knowledge and can construct better and more informative prompts.

But therein lies the big drawback to using ChatGPT and other AI tools for education. At least so far, they’re reliant on the end user writing good prompts and having some sense about how to organize a lesson plan for themselves. Human tutors, on the other hand, have a lot of personal experience in these kinds of areas. Someone who instructs others in foreign languages professionally probably has a good inherent sense of when you need to focus on expanding your vocabulary vs. drilling certain kinds of verb and tense conjugations. They’ve helped many other students prepare for tests, quizzes, and real-world challenges, while computer software can only guess at what kinds of scenarios its proteges will face.

A recent Forbes editorial by academic Thomas Davenport suggests that, while AI is getting all the hype right now, other forms of computing or machine learning are still going to be more effective for a lot of basic tasks. From a marketing perspective in 2023, it’s helpful for a tech company to throw the “AI” brand around, but it’s not magically going to be the answer for every problem.

Davenport points to a similar (if smaller) whirlwind of excitement around IBM's "Watson" in the early 2010s, when it was famously able to take out human "Jeopardy!" champions. It turns out, Watson was a general knowledge engine, really best suited for jobs like playing "Jeopardy!" But after the software gained celebrity status, people tried to use it for all sorts of advanced applications, like designing cancer drugs or providing investment advice. Today, few people turn to Watson for these kinds of solutions. It's just the wrong tool for the job. In that same way, Davenport suggests that generative AI is in danger of being misapplied.

While the industry and end users both race to solve the AI puzzle in real time, governments are also feeling pressure to step in and potentially regulate the AI industry. This is much easier said than done, though, as politicians face the same kinds of questions and uncertainty as everyone else.

OpenAI CEO Sam Altman has been calling for governments to begin regulating AI, but just this week, he suggested that the company might pull out of the European Union entirely if the regulations were too onerous. Specifically, Altman worries that attempts to narrow what kinds of data can be used to train AI systems – specifically blocking copyrighted material – might well prove impossible. “If we can comply, we will, and if we can’t, we’ll cease operating,” Altman told Time. “We will try, but there are technical limits to what’s possible.” (Altman has already started walking this threat back, suggesting he has no immediate plans to exit the EU.)

In the US, The White House has been working on a “Blueprint for an AI Bill of Rights,” but it’s non-binding, just a collection of largely vague suggestions. It’s one thing to agree “consumers shouldn’t face discrimination from an algorithm” and “everyone should be protected from abusive data practices and have agency over how their data is used.” But enforcement is an entirely different animal. A lot of these issues already exist in tech, and are much larger than AI, and the US government already doesn’t do much about them.

Additionally, it's possible AI regulations won't work well at all if they aren't global. Even if you set some policies and get an entire nation's government to agree, how do you set similar protocols worldwide? What if the US and Europe agree but India doesn't? Everyone around the world accesses roughly the same internet, so without any kind of international standard, it's going to be much harder for individual nations to enforce specific rules. As with so many other AI developments, there's inherent danger in patchwork regulations; they could allow some companies, regions or players to move forward while others are unfairly or ineffectively stymied or held back.

The same kinds of socio-economic concerns around AI that we have nationally – some sectors of the workforce left behind, the wealthiest and most established players coming into the new market with massive advantages, the rapid spread of misinformation – are all, in actuality, global concerns. Just as the hegemony of Microsoft and Google threatens the ability of new players to enter the AI space, the West's early dominance of AI tech threatens to push out companies and innovations from emerging markets like Southeast Asia, Sub-Saharan Africa, and Central America. Left unfettered, AI could potentially deepen social, economic, and digital divisions both within and between all of these societies.

Undaunted, some governments aren't waiting around for these tools to develop any further before they start attempting to regulate them. New York City has already set up some rules about how AI can be used during the hiring process, which will take effect in July. The law requires any company using AI software in hiring to notify candidates that it's being used, and to have independent auditors check the system annually for bias.

This sort of piecemeal figure-it-out-as-we-go approach is probably what’s going to be necessary, at least short-term, as AI development shows zero signs of slowing down or stopping any time soon. Though there’s some disagreement among experts, most analysts agree with Wharton professor and economist Jeremy Siegel, who told CNBC this week that AI is not yet a bubble. He pointed to the Nvidia earnings as a sign the market remains healthy and not overly frothy. So, at least for now, the feverish excitement around AI is not going to burst like a late ‘90s startup stock. The world needs to prepare as if this technology is going to be with us for a while.

Rivian CEO Teases R2, New Features in Instagram AMA

David Shultz

David Shultz reports on clean technology and electric vehicles, among other industries, for dot.LA. His writing has appeared in The Atlantic, Outside, Nautilus and many other publications.


Rivian CEO RJ Scaringe took to Instagram last weekend to answer questions from the public about his company and its future. Topics covered included new colors, sustainability, production ramp, and new products and features. Viewers also got a first look at the company's much-anticipated R2 platform, albeit made of clay and covered by a sheet, but hey, that's…something. If you don't want to watch the whole 33-minute video, which is now also on YouTube, we've got the highlights for you.
