Homophobia Is Easy To Encode in AI. One Researcher Built a Program To Change That.
Samson Amore
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
Artificial intelligence is now part of our everyday digital lives. We’ve all had the experience of searching for answers on a website or app and finding ourselves interacting with a chatbot. At best, the bot can help navigate us to what we’re after; at worst, we’re usually led to unhelpful information.
But imagine you’re a queer person, and the dialogue you have with an AI somehow discloses that part of your identity, and the chatbot you hit up to ask routine questions about a product or service replies with a deluge of hate speech.
Unfortunately, that isn’t as far-fetched a scenario as you might think. Artificial intelligence (AI) relies on the information provided to it to build its decision-making models, which usually reflect the biases of the people creating them and the data they’re fed. If the people programming the network are mainly straight, cisgender white men, then the AI is likely to reflect this.
As the use of AI continues to expand, some researchers are growing concerned that there aren’t enough safeguards in place to prevent systems from becoming inadvertently bigoted when interacting with users.
Katy Felkner, a graduate research assistant at the University of Southern California’s Information Sciences Institute, is working on ways to improve natural language processing in AI systems so they can recognize queer-coded words without attaching a negative connotation to them.
At a press day for USC’s ISI on Sept. 15, Felkner presented some of her work. One focus of hers is large language models, systems she said are “the backbone of pretty much all modern language technologies,” including Siri, Alexa and even autocorrect. (Quick note: In the AI field, experts call different artificial intelligence systems “models.”)
“Models pick up social biases from the training data, and there are some metrics out there for measuring different kinds of social biases in large language models, but none of them really worked well for homophobia and transphobia,” Felkner explained. “As a member of the queer community, I really wanted to work on making a benchmark that helped ensure that model generated text doesn't say hateful things about queer and trans people.”
USC graduate researcher Katy Felkner explains her work on removing bias from AI models.
Felkner said her research began in a class taught by USC Professor Fred Morstatter, PhD, but noted it’s “informed by my own lived experience and what I would like to see be better for other members of my community.”
To train an AI model to recognize that queer terms aren’t dirty words, Felkner said she first had to build a benchmark that could help measure whether the AI system had encoded homophobia or transphobia. Nicknamed WinoQueer (after Stanford computer scientist Terry Winograd, a pioneer in the field of human-computer interaction design), the bias detection system tracks how often an AI model prefers straight sentences versus queer ones. An example, Felkner said, is if the AI model ignores the sentence “he and she held hands” but flags the phrase “she held hands with her” as an anomaly.
Between 73% and 77% of the time, Felkner said, the AI picks the more heteronormative outcome, “a sign that models tend to prefer or tend to think straight relationships are more common or more likely than gay relationships,” she noted.
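That pair-preference measurement can be illustrated with a toy sketch (hypothetical code, not Felkner’s actual WinoQueer implementation): given a model’s log-likelihood scores for matched straight/queer sentence pairs, count how often the straight-coded sentence scores higher.

```python
# Hypothetical illustration of a pair-preference bias metric.
# Each pair holds the model's log-likelihood for the straight-coded
# sentence and its queer-coded counterpart; higher means "more probable."

def preference_rate(pairs):
    """Return the fraction of pairs where the model scores the
    straight-coded sentence higher than the queer-coded one."""
    preferred = sum(1 for straight, queer in pairs if straight > queer)
    return preferred / len(pairs)

# Toy scores for illustration only.
scores = [(-12.1, -15.3), (-9.8, -9.9), (-11.0, -10.2), (-13.5, -14.0)]
print(preference_rate(scores))  # 0.75 -> straight sentence preferred 75% of the time
```

A rate near 0.5 would indicate no systematic preference; the 73%–77% figures Felkner reported correspond to a strong heteronormative skew on this kind of metric.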
To further train the AI, Felkner and her team collected a dataset of about 2.8 million tweets and over 90,000 news articles from 2015 through 2021 that include examples of queer people talking about themselves or provide “mainstream coverage of queer issues.” She then began feeding it back to the AI models she was focused on. News articles helped, but weren’t as effective as Twitter content, Felkner said, because the AI learns best from hearing queer people describe their varied experiences in their own words.
As anthropologist Mary Gray told Forbes last year, “We [LGBTQ people] are constantly remaking our communities. That’s our beauty; we constantly push what is possible. But AI does its best job when it has something static.”
By re-training the AI model, researchers can mitigate its biases and ultimately make it more effective at making decisions.
“When AI whittles us down to one identity, we can look at that and say, ‘No. I’m more than that,’” Gray added.
The consequences of an AI model encoding bias against queer people could be more severe than a Shopify bot potentially sending slurs, Felkner noted: it could also affect people’s livelihoods.
For example, Amazon scrapped a program in 2018 that used AI to identify top candidates by scanning their resumes. The problem was, the computer models almost exclusively picked men.
“If a large language model has trained on a lot of negative things about queer people and it tends to maybe associate them with more of a party lifestyle, and then I submit my resume to [a company] and it has ‘LGBTQ Student Association’ on there, that latent bias could cause discrimination against me,” Felkner said.
The next steps for WinoQueer, Felkner said, are to test it against even larger AI models. Felkner also said tech companies using AI need to be aware of how implicit biases can affect those systems and be receptive to using programs like hers to check and refine them.
Most importantly, she said, tech firms need to have safeguards in place so that if an AI does start spewing hate speech, that speech doesn’t reach the human on the other end.
“We should be doing our best to devise models so that they don't produce hateful speech, but we should also be putting software and engineering guardrails around this so that if they do produce something hateful, it doesn't get out to the user,” Felkner said.
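The guardrail Felkner describes can be sketched in a few lines (a hypothetical illustration, not any company’s actual filter; real deployments typically use trained toxicity classifiers rather than keyword lists): the model’s reply is checked before it is shown, and a safe fallback is returned if the check fails.

```python
# Hypothetical output guardrail: screen a model's reply before it
# reaches the user. BLOCKED_TERMS is a stand-in for a real safety
# check such as a trained toxicity classifier.

BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder tokens, not real terms

def guard_output(model_reply: str,
                 fallback: str = "Sorry, I can't help with that.") -> str:
    """Return the model's reply only if it passes the safety check;
    otherwise return a safe fallback so hateful text is never shown."""
    words = {w.strip(".,!?").lower() for w in model_reply.split()}
    if words & BLOCKED_TERMS:
        return fallback
    return model_reply

print(guard_output("Happy to help with your order!"))
print(guard_output("a reply containing slur1"))
```

The design point is the wrapper itself: even a well-trained model gets a second, independent check between generation and the user.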
LA’s Upgrade in Travel and NBA Viewing
08:41 AM | July 26, 2024
Image Source: Los Angeles World Airports
🔦 Spotlight
Exciting developments are underway for Los Angeles as the city prepares for major upgrades in both travel and entertainment. The Los Angeles Board of Airport Commissioners has approved an additional $400 million for the Automated People Mover (APM) at LAX, increasing its total budget to $3.34 billion. This boost ensures the elevated train’s completion by December 8, 2025, with service starting in January 2026. For Angelenos, this means a significant improvement in travel convenience. The APM will streamline connections between parking, rental car facilities, and the new Metro transit station, drastically cutting traffic congestion around the airport. Imagine a future without the dreaded 30-minute traffic delays at LAX! The APM will operate 24/7, reducing airport traffic by 42 million vehicle miles annually and carrying 30 million passengers each year, while also creating thousands of local jobs and supporting small businesses.
Meanwhile, the NBA is also making waves with its new broadcasting deals. The league has signed multi-year agreements with ESPN, NBC, and Amazon Prime Video, marking a notable shift in media partnerships. ESPN will maintain its long-standing role, NBC returns as a network broadcaster after years away, and Amazon Prime Video will provide NBA games through its streaming platform. Starting with the 2025-2026 season, these deals will enhance the league's reach and revenue, aligning with the NBA's goal to expand its audience and adapt to evolving viewing habits. Whether you're catching the action on TV or streaming online, these changes promise to elevate the fan experience and bring more basketball excitement to Los Angeles.
🤝 Venture Deals
LA Companies
- Pearl, a startup that makes AI-powered software that assists dentists in identifying cavities, gum disease, and other dental conditions, raised a $58M Series B led by Left Lane Capital, with Smash Capital and others also participating. - learn more
LA Venture Funds
- Fulcrum Venture Group participated in a prior $3.5M Pre-Seed Round for Code Metal, a developer tools startup. - learn more
- B Capital co-led a $12.5M Seed Round for Star Catcher, a startup that aims to develop a space-based grid that captures solar energy in space and distributes it to satellites and other space assets. - learn more
- Mantis VC and Amplify participated in a $140M Series C for Chainguard, an open source security startup. - learn more
- Prominent LA venture capitalist Carter Reum and his wife, Paris Hilton, participated in a $14M Seed/Series A for W, the men’s personal care brand from Jake Paul. - learn more
LA Exits
- Warner Bros. Games acquired Player First Games, developer of the recently launched free-to-play platform-fighter video game MultiVersus. - learn more
Dental Startup Pearl Gets FDA Clearance for AI-Powered X-Ray Platform
04:57 PM | March 08, 2022
Courtesy of Pearl
A West Hollywood-based startup has received Food and Drug Administration clearance for what it calls the first artificial intelligence-enabled product that can read dental x-rays and identify cavities, plaque and other dental conditions.
Second Opinion is an AI detection platform created by Pearl, a dentistry startup founded in 2019 to leverage machine learning and AI to help dentists detect problems in otherwise healthy teeth. The startup raised $11 million in Series A funding in 2019 from Craft Ventures and Santa Monica-based Crosscut Ventures.
To develop Second Opinion, Pearl gathered over 100 million dental x-rays from dental practices and academic institutions. The AI platform points out discrepancies found in an x-ray and also serves as a patient communication tool, allowing dentists to show different models of a patient’s teeth and point out problem areas.
“I do think that this is going to become very fundamental to the category [of dentistry] very quickly, and therefore will actually serve as a model for the rest of medicine—for how to infuse and deploy AI widely at scale, with the ultimate benefit and potential of really elevating the standard of care in a provable way,” Pearl founder and CEO Ophir Tanz told dot.LA.
Pearl’s AI program examines teeth. Courtesy of Pearl
The FDA’s clearance comes amid ongoing skepticism from some within the medical community about the effectiveness of AI applications. A presentation made by the American College of Radiology to the FDA in 2020 reported that 95% of clinicians thought AI was too inconsistent or inaccurate to be used by medical practices. Though the FDA is the largest regulatory body in health care, it has no consistent framework for signing off on a piece of AI that can guide diagnostics, such as the number of reference images used, the diversity of its dataset or its accuracy rate. This has slowed down the clinical adoption rate of such technologies—and while the FDA has proposed a framework to address some of these challenges, nothing has been implemented yet.
Second Opinion’s journey to receiving FDA clearance involved a multi-year process to test every single use case that the platform is trained to do—such as identifying tooth decay, plaque, bone lesions around a tooth and a handful of other discrepancies in otherwise-healthy teeth. Receiving FDA clearance entails a separate and different process than receiving FDA approval; while the former indicates that a product is as good as existing alternatives already on the market, the latter requires a different set of processes for more novel or riskier products to prove that their benefits outweigh potential drawbacks. Tanz noted that the FDA sought only clearance for Second Opinion, and neither required nor asked for approval to allow the product to be marketed in the U.S.
A patient goes under the lights in the dentist’s office. Courtesy of Pearl
Though some critics claim that AI can’t pick up on certain nuances that a human dentist might, Pearl contends that those critics often fail to account for the counter-argument of human error. Through the nonprofit Dental AI Council, the startup commissioned a study that found when presenting a panel of 136 dentists with an X-ray to review, roughly half of them found a cavity, while the other half found none.
Tanz said he is not interested in expanding Pearl’s services into radiology at large, and acknowledged that bottlenecks in the FDA process make it harder to develop, clear and adopt similar technology. Already, the startup has received regulatory approvals from Canada, Australia, the U.K., the European Union and the United Arab Emirates, and is working with roughly 4,000 dental organizations and radiograph manufacturers.
“We think of this as a utility, kind of like water or power,” Tanz said. “You're not going to have any dental practice where this is not going to be powering the radiographic side of things… We really do believe that this will be integrated into every practice in the world in a relatively short period of time.”
Update, March 10: This article has been updated to clarify the difference between FDA clearance and FDA approval, and to specify that Pearl's Second Opinion product did not require FDA approval to be marketed in the U.S.
Keerthi Vedantam
Keerthi Vedantam is a bioscience reporter at dot.LA. She cut her teeth covering everything from cloud computing to 5G in San Francisco and Seattle. Before she covered tech, Keerthi reported on tribal lands and congressional policy in Washington, D.C. Connect with her on Twitter, Clubhouse (@keerthivedantam) or Signal at 408-470-0776.