![artificial intelligence generic](https://dot.la/media-library/artificial-intelligence-generic.jpg?id=28852064&width=1200&height=400&quality=85&coordinates=0%2C280%2C0%2C280)
Image from Shutterstock
Homophobia Is Easy To Encode in AI. One Researcher Built a Program To Change That.
Samson Amore
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
Artificial intelligence is now part of our everyday digital lives. We’ve all had the experience of searching for answers on a website or app and finding ourselves interacting with a chatbot. At best, the bot helps us navigate to what we’re after; at worst, it leads us to unhelpful information.
But imagine you’re a queer person whose dialogue with an AI somehow discloses that part of your identity, and the chatbot you turned to with routine questions about a product or service replies with a deluge of hate speech.
Unfortunately, that isn’t as far-fetched a scenario as you might think. Artificial intelligence (AI) relies on the information it is fed to build its decision-making models, which usually reflect the biases of both the people creating them and the data itself. If the people programming the system are mainly straight, cisgender white men, then the AI is likely to reflect that.
As the use of AI continues to expand, some researchers are growing concerned that there aren’t enough safeguards in place to prevent systems from becoming inadvertently bigoted when interacting with users.
Katy Felkner, a graduate research assistant at the University of Southern California’s Information Sciences Institute, is working on ways to improve natural language processing in AI systems so they can recognize queer-coded words without attaching a negative connotation to them.
At a press day for USC’s ISI on Sept. 15, Felkner presented some of her work. One focus of hers is large language models, systems she said are “the backbone of pretty much all modern language technologies,” including Siri, Alexa and even autocorrect. (Quick note: In the AI field, experts call different artificial intelligence systems “models.”)
“Models pick up social biases from the training data, and there are some metrics out there for measuring different kinds of social biases in large language models, but none of them really worked well for homophobia and transphobia,” Felkner explained. “As a member of the queer community, I really wanted to work on making a benchmark that helped ensure that model generated text doesn't say hateful things about queer and trans people.”
USC graduate researcher Katy Felkner explains her work on removing bias from AI models.
Felkner said her research began in a class taught by USC Professor Fred Morstatter, PhD, but noted it’s “informed by my own lived experience and what I would like to see be better for other members of my community.”
To train an AI model to recognize that queer terms aren’t dirty words, Felkner said she first had to build a benchmark that could help measure whether the AI system had encoded homophobia or transphobia. Nicknamed WinoQueer (after Stanford computer scientist Terry Winograd, a pioneer in the field of human-computer interaction design), the bias detection system tracks how often an AI model prefers straight sentences versus queer ones. An example, Felkner said, is if the AI model ignores the sentence “he and she held hands” but flags the phrase “she held hands with her” as an anomaly.
Between 73% and 77% of the time, Felkner said, the AI picked the more heteronormative outcome, “a sign that models tend to prefer or tend to think straight relationships are more common or more likely than gay relationships.”
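A paired-sentence probe of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not Felkner’s actual WinoQueer code: it scores each sentence in a (heteronormative, queer) pair with an off-the-shelf masked language model and counts how often the model assigns the heteronormative sentence a higher likelihood. The model name, scoring method and example pairs are all stand-ins.

```python
# Illustrative paired-sentence bias probe (not the actual WinoQueer benchmark).
# Idea: score each sentence in a (heteronormative, queer) pair with a masked
# language model and count how often the model prefers the heteronormative one.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):               # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Illustrative pairs; a real benchmark uses a much larger, curated set.
pairs = [
    ("He and she held hands.", "She held hands with her."),
    ("The man kissed his wife.", "The man kissed his husband."),
]

prefers_straight = sum(
    pseudo_log_likelihood(straight) > pseudo_log_likelihood(queer)
    for straight, queer in pairs
)
print(f"Model preferred the heteronormative sentence in {prefers_straight}/{len(pairs)} pairs")
```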
To further train the AI, Felkner and her team collected a dataset of about 2.8 million tweets and over 90,000 news articles from 2015 through 2021 that include examples of queer people talking about themselves or provide “mainstream coverage of queer issues.” She then fed that corpus back to the AI models she was studying. News articles helped, but weren’t as effective as Twitter content, Felkner said, because the AI learns best from hearing queer people describe their varied experiences in their own words.
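The retraining step Felkner describes amounts to continued pretraining of a language model on the collected corpus. The sketch below shows roughly what that can look like with the Hugging Face Trainer; the model name, file path and hyperparameters are illustrative assumptions, not the exact setup used in the research.

```python
# Sketch: continued pretraining (fine-tuning) of a masked language model on a
# text corpus such as queer-authored tweets and news coverage. Paths, model and
# hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# One document per line in a plain-text file (hypothetical path).
dataset = load_dataset("text", data_files={"train": "queer_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

# Randomly mask 15% of tokens so the model keeps learning from the new data.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-retrained", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()   # afterward, re-run the bias probe to see whether the gap shrinks
```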
As anthropologist Mary Gray told Forbes last year, “We [LGBTQ people] are constantly remaking our communities. That’s our beauty; we constantly push what is possible. But AI does its best job when it has something static.”
By re-training the AI model, researchers can mitigate its biases and ultimately make it more effective at making decisions.
“When AI whittles us down to one identity, we can look at that and say, ‘No. I’m more than that,’” Gray added.
The consequences of an AI model encoding bias against queer people could be more severe than a Shopify bot sending slurs, Felkner noted; it could also affect people’s livelihoods.
For example, Amazon scrapped a program in 2018 that used AI to identify top candidates by scanning their resumes. The problem: the models almost exclusively picked men.
“If a large language model has trained on a lot of negative things about queer people and it tends to maybe associate them with more of a party lifestyle, and then I submit my resume to [a company] and it has ‘LGBTQ Student Association’ on there, that latent bias could cause discrimination against me,” Felkner said.
The next steps for WinoQueer, Felkner said, are to test it against even larger AI models. Felkner also said tech companies using AI need to be aware of how implicit biases can affect those systems and be receptive to using programs like hers to check and refine them.
Most importantly, she said, tech firms need to have safeguards in place so that if an AI does start spewing hate speech, that speech doesn’t reach the human on the other end.
“We should be doing our best to devise models so that they don't produce hateful speech, but we should also be putting software and engineering guardrails around this so that if they do produce something hateful, it doesn't get out to the user,” Felkner said.
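One common way to build such a guardrail is to screen generated text with a toxicity classifier before it ever reaches the user. The sketch below is a hypothetical illustration of that idea; the classifier model, its labels and the threshold are assumptions, and a production system would layer several such checks.

```python
# Sketch of an output-side guardrail: run generated text through a toxicity
# classifier and substitute a fallback response if any toxicity label scores
# above a threshold. Model name and threshold are illustrative choices.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def safe_reply(generated_text: str, threshold: float = 0.5) -> str:
    scores = toxicity(generated_text, top_k=None)   # scores for every toxicity label
    if any(s["score"] >= threshold for s in scores):
        return "Sorry, I can't help with that."     # fallback instead of hateful output
    return generated_text
```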
🤠Musk Picks Texas and 🔥Tinder AI Picks Your Profile Pictures
10:31 AM | July 19, 2024
🔦 Spotlight
Tinder is altering dating profile creation with its new AI-powered Photo Selector feature, designed to help users choose their most appealing dating profile pictures. This innovative tool employs facial recognition technology to curate a set of up to 10 photos from the user's device, streamlining the often time-consuming process of profile setup. To use the feature, users simply take a selfie within the Tinder app and grant access to their camera roll. The AI then analyzes the photos based on factors like lighting and composition, drawing from Tinder's research on what makes an effective profile picture.
The selection process occurs entirely on the user's device, ensuring privacy and data security. Tinder doesn't collect or store any biometric data or photos beyond those chosen for the profile, and the facial recognition data is deleted once the user exits the feature. This new tool addresses a common pain point for users, as Tinder's research shows that young singles typically spend about 25 to 33 minutes selecting a profile picture. By automating this process, Tinder aims to reduce profile creation time and allow users to focus more on making meaningful connections.
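Tinder hasn’t published the feature’s internals, but the on-device idea can be illustrated with a toy example: score each photo locally using simple heuristics (brightness and edge sharpness as rough stand-ins for lighting and composition) and keep the top ten. Everything below is hypothetical, stays on the device and is not Tinder’s actual algorithm.

```python
# Toy sketch of local photo ranking: score candidate images by brightness and
# a rough sharpness proxy, then keep the ten highest-scoring ones. Purely
# illustrative; no data leaves the device.
from pathlib import Path
from PIL import Image, ImageStat, ImageFilter

def photo_score(path: Path) -> float:
    img = Image.open(path).convert("L").resize((256, 256))
    brightness = ImageStat.Stat(img).mean[0] / 255           # 0..1 lighting proxy
    edges = img.filter(ImageFilter.FIND_EDGES)
    sharpness = ImageStat.Stat(edges).mean[0] / 255          # 0..1 focus proxy
    return 0.5 * brightness + 0.5 * sharpness

candidates = sorted(Path("camera_roll").glob("*.jpg"), key=photo_score, reverse=True)
top_ten = candidates[:10]   # suggested profile photos, never uploaded anywhere
```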
In wholly unrelated news, Elon Musk has announced plans to relocate the headquarters of X (formerly Twitter) and SpaceX from California to Texas. SpaceX will move from Hawthorne to Starbase, while X will shift from San Francisco to Austin. Musk cited concerns about aggressive drug users near X's current headquarters and a new California law regarding gender identity notification in schools as reasons for the move. This decision follows Musk's previous relocation of Tesla's headquarters to Texas in 2021.
🤝 Venture Deals
LA Companies
- DOG PPL, a canine social club, raised an undisclosed amount led by Ani.VC. - learn more
LA Venture Funds
- Bonfire Ventures participated in a $20.5M Series A for Alvys, a logistics operating platform. - learn more
- B Capital led a $65M Series D for LevelTen, a Seattle provider of transaction infrastructure for the energy transition. - learn more
- Amboy Street Ventures, a VC fund focused on sexual health and women’s health technology, is raising up to $50M for its second fund. - learn more
- Mucker Capital participated in a $43M Series B for Octagos Health, a provider of cardiac device monitoring tools. - learn more
- Magnify Ventures participated in a $10.9M Series A for Seven Starling, a virtual maternal behavioral health startup. - learn more
LA Exits
- Penguin Random House agreed to acquire comic book publisher Boom! Studios from backers like Walt Disney Co. - learn more
CrowdStrike CEO Says He Regrets Not Firing People Quicker
03:10 PM | March 04, 2020
Ben Bergman/dot.LA
George Kurtz, co-founder and CEO of the cloud-native endpoint security platform CrowdStrike, says executives should be obsessed with culture. Everyone below him must be fanatical about customer success and outcomes, and if they aren't fitting in, they need to go quickly. It's one of the biggest lessons he's learned as CEO.
"Not one time have I regretted firing someone too fast," Kurtz told a lunchtime crowd at the first day of the Montgomery Summit in Santa Monica. "It's that I waited too long."
Kurtz co-founded the company in Sunnyvale, California, in 2011, and it went public last year. He was joined on a panel by John Chambers, the former executive chairman and CEO of Cisco Systems, who said he bought 180 companies during his tenure but never acquired one that wasn't a very close cultural fit.
"I walked on one of the bigger acquisitions we were going to do," Chambers said. "Culture is as important as strategy and vision and I did not understand that when I was a young CEO."
Chambers said he was proud of Cisco's 95% employee retention rate when he was CEO, which is well above the industry average. He oversaw a rigorous hiring process to make sure candidates were right.
"If you're not interviewing through 10 people, you're not doing the screening process properly," Chambers said.
If an executive wanted to jump to a competitor, Chambers would try to find out what was at the root of that person's unhappiness. The number one factor: dissatisfaction with their immediate supervisor.
Ben Bergman
Ben Bergman is the newsroom's senior finance reporter. Previously he was a senior business reporter and host at KPCC, a senior producer at Gimlet Media, a producer at NPR's Morning Edition, and produced two investigative documentaries for KCET. He has been a frequent on-air contributor to business coverage on NPR and Marketplace and has written for The New York Times and Columbia Journalism Review. Ben was a 2017-2018 Knight-Bagehot Fellow in Economic and Business Journalism at Columbia Business School. In his free time, he enjoys skiing, playing poker, and cheering on The Seattle Seahawks.
https://twitter.com/thebenbergman
ben@dot.la