
Homophobia Is Easy To Encode in AI. One Researcher Built a Program To Change That.

Artificial intelligence is now part of our everyday digital lives. We’ve all had the experience of searching for answers on a website or app and finding ourselves interacting with a chatbot. At best, the bot helps us navigate to what we’re after; at worst, it leads us to unhelpful information.

But imagine you’re a queer person, that the dialogue you have with an AI somehow discloses that part of your identity, and that the chatbot you hit up with routine questions about a product or service replies with a deluge of hate speech.


Unfortunately, that isn’t as far-fetched a scenario as you might think. Artificial intelligence (AI) relies on the information fed to it to build its decision-making models, which usually reflect the biases of both the people creating them and the data they are trained on. If the people programming the network are mainly straight, cisgender white men, then the AI is likely to reflect this.

As the use of AI continues to expand, some researchers are growing concerned that there aren’t enough safeguards in place to prevent systems from becoming inadvertently bigoted when interacting with users.

Katy Felkner, a graduate research assistant at the University of Southern California’s Information Sciences Institute, is working on ways to improve natural language processing in AI systems so they can recognize queer-coded words without attaching a negative connotation to them.

At a press day for USC’s ISI on Sept. 15, Felkner presented some of her work. One focus of hers is large language models, systems she said are “the backbone of pretty much all modern language technologies,” including Siri, Alexa—even autocorrect. (Quick note: In the AI field, experts call different artificial intelligence systems “models.”)

“Models pick up social biases from the training data, and there are some metrics out there for measuring different kinds of social biases in large language models, but none of them really worked well for homophobia and transphobia,” Felkner explained. “As a member of the queer community, I really wanted to work on making a benchmark that helped ensure that model-generated text doesn't say hateful things about queer and trans people.”

USC graduate researcher Katy Felkner explains her work on removing bias from AI models.

Felkner said her research began in a class taught by USC Professor Fred Morstatter, PhD, but noted it’s “informed by my own lived experience and what I would like to see be better for other members of my community.”

To train an AI model to recognize that queer terms aren’t dirty words, Felkner said she first had to build a benchmark that could help measure whether the AI system had encoded homophobia or transphobia. Nicknamed WinoQueer (after Stanford computer scientist Terry Winograd, a pioneer in the field of human-computer interaction design), the bias detection system tracks how often an AI model prefers straight sentences versus queer ones. An example, Felkner said, is if the AI model ignores the sentence “he and she held hands” but flags the phrase “she held hands with her” as an anomaly.
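A probe in this spirit is simple to sketch. The snippet below is a minimal illustration, not WinoQueer itself: it scores a straight/queer sentence pair with an off-the-shelf masked language model using pseudo-log-likelihood, a scoring technique used by similar bias benchmarks, and reports which version the model finds more plausible. The model choice and scoring details here are assumptions for the example.

```python
# Minimal sketch of a sentence-pair bias probe (not the actual WinoQueer code).
# Pseudo-log-likelihood scoring: sum the log-probability of each token in a
# sentence with that token masked out, then compare the two sentences' scores.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Score a sentence by masking each token in turn and summing log-probs."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip the [CLS] and [SEP] tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# The example pair from the article: which sentence does the model prefer?
straight, queer = "He and she held hands.", "She held hands with her."
preferred = "straight" if pseudo_log_likelihood(straight) > pseudo_log_likelihood(queer) else "queer"
print(f"Model prefers the {preferred} sentence.")
```

Run over many such pairs, a benchmark like this yields the preference rate Felkner reports below.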

Between 73% and 77% of the time, Felkner said, the AI picks the more heteronormative outcome, “a sign that models tend to prefer or tend to think straight relationships are more common or more likely than gay relationships.”

To further train the AI, Felkner and her team collected a dataset of about 2.8 million tweets and over 90,000 news articles from 2015 through 2021 that include examples of queer people talking about themselves or provide “mainstream coverage of queer issues.” She then began feeding the data back to the AI models she was focused on. News articles helped, but weren’t as effective as Twitter content, Felkner said, because the AI learns best from hearing queer people describe their varied experiences in their own words.
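In practice, this kind of mitigation amounts to continued pretraining: running the model’s original training objective over the new, affirming corpus. The sketch below shows roughly what that looks like with the Hugging Face Trainer; the corpus file name, base model, and hyperparameters are illustrative stand-ins, not the team’s actual setup.

```python
# Rough sketch of continued pretraining on a debiasing corpus.
# "queer_corpus.txt" is a hypothetical file with one tweet/sentence per line.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

dataset = load_dataset("text", data_files={"train": "queer_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="debiased-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    # Randomly mask 15% of tokens so the model relearns them in context
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()
```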

As anthropologist Mary Gray told Forbes last year, “We [LGBTQ people] are constantly remaking our communities. That’s our beauty; we constantly push what is possible. But AI does its best job when it has something static.”

By re-training the AI model, researchers can mitigate its biases and ultimately make it more effective at making decisions.

“When AI whittles us down to one identity, we can look at that and say, ‘No. I’m more than that,’” Gray added.

The consequences of an AI model encoding bias against queer people could be more severe than a Shopify bot potentially sending slurs, Felkner noted – it could also affect people’s livelihoods.

For example, Amazon scrapped a program in 2018 that used AI to identify top candidates by scanning their resumes. The problem was, the computer models almost exclusively picked men.

“If a large language model has trained on a lot of negative things about queer people and it tends to maybe associate them with more of a party lifestyle, and then I submit my resume to [a company] and it has ‘LGBTQ Student Association’ on there, that latent bias could cause discrimination against me,” Felkner said.

The next steps for WinoQueer, Felkner said, are to test it against even larger AI models. Felkner also said tech companies using AI need to be aware of how implicit biases can affect those systems and be receptive to using programs like hers to check and refine them.

Most importantly, she said, tech firms need to have safeguards in place so that if an AI does start spewing hate speech, that speech doesn’t reach the human on the other end.

“We should be doing our best to devise models so that they don't produce hateful speech, but we should also be putting software and engineering guardrails around this so that if they do produce something hateful, it doesn't get out to the user,” Felkner said.
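One simple form such a guardrail can take is an output filter: before a generated reply is shown, a separate classifier screens it and a human-facing fallback is substituted if it fails. The sketch below illustrates the idea; the classifier checkpoint and threshold are illustrative choices, not a specific production system.

```python
# Minimal sketch of an output guardrail: screen generated replies before
# they reach the user. "unitary/toxic-bert" is one publicly available
# toxicity classifier; any moderation model could stand in here.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def safe_reply(generated_text: str, threshold: float = 0.5) -> str:
    """Return the model's reply only if it passes the toxicity screen."""
    result = toxicity(generated_text)[0]  # e.g. {"label": "toxic", "score": 0.97}
    if result["label"] == "toxic" and result["score"] >= threshold:
        # Illustrative fallback instead of letting the hateful output through
        return "Sorry, I can't help with that. Let me connect you with a person."
    return generated_text

print(safe_reply("Thanks for reaching out! Your order ships Monday."))
```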
