Homophobia Is Easy To Encode in AI. One Researcher Built a Program To Change That.
Samson Amore
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
Artificial intelligence is now part of our everyday digital lives. We’ve all had the experience of searching for answers on a website or app and finding ourselves interacting with a chatbot. At best, the bot helps navigate us to what we’re after; at worst, it leads us to unhelpful information.
But imagine you’re a queer person, and the dialogue you have with an AI somehow discloses that part of your identity, and the chatbot you hit up to ask routine questions about a product or service replies with a deluge of hate speech.
Unfortunately, that isn’t as far-fetched a scenario as you might think. Artificial intelligence (AI) relies on the information provided to it to create its decision-making models, which usually reflect the biases of both the people creating them and the data being fed in. If the people programming the network are mainly straight, cisgender white men, then the AI is likely to reflect this.
As the use of AI continues to expand, some researchers are growing concerned that there aren’t enough safeguards in place to prevent systems from becoming inadvertently bigoted when interacting with users.
Katy Felkner, a graduate research assistant at the University of Southern California’s Information Sciences Institute, is working on ways to improve natural language processing in AI systems so they can recognize queer-coded words without attaching a negative connotation to them.
At a press day for USC’s ISI on Sept. 15, Felkner presented some of her work. One focus of hers is large language models, systems she said are “the backbone of pretty much all modern language technologies,” including Siri, Alexa and even autocorrect. (Quick note: In the AI field, experts call different artificial intelligence systems “models.”)
“Models pick up social biases from the training data, and there are some metrics out there for measuring different kinds of social biases in large language models, but none of them really worked well for homophobia and transphobia,” Felkner explained. “As a member of the queer community, I really wanted to work on making a benchmark that helped ensure that model generated text doesn't say hateful things about queer and trans people.”
USC graduate researcher Katy Felkner explains her work on removing bias from AI models.
Felkner said her research began in a class taught by USC Professor Fred Morstatter, PhD, but noted it’s “informed by my own lived experience and what I would like to see be better for other members of my community.”
To train an AI model to recognize that queer terms aren’t dirty words, Felkner said she first had to build a benchmark that could help measure whether the AI system had encoded homophobia or transphobia. Nicknamed WinoQueer (after Stanford computer scientist Terry Winograd, a pioneer in the field of human-computer interaction design), the bias detection system tracks how often an AI model prefers straight sentences versus queer ones. An example, Felkner said, is if the AI model ignores the sentence “he and she held hands” but flags the phrase “she held hands with her” as an anomaly.
Between 73% and 77% of the time, the AI picks the more heteronormative outcome, “a sign that models tend to prefer or tend to think straight relationships are more common or more likely than gay relationships,” Felkner said.
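The comparison Felkner describes can be sketched in a few lines of code. This is a hypothetical illustration, not the actual WinoQueer implementation: it assumes some scoring function that rates how likely a model finds a sentence (here, a toy word-frequency scorer stands in for a real language model) and counts how often the heteronormative sentence of each pair wins.

```python
# Hypothetical sketch of a WinoQueer-style probe -- not the real benchmark code.
# `score` stands in for a language model's likelihood estimate of a sentence.

def preference_rate(pairs, score):
    """Fraction of (heteronormative, queer) sentence pairs where the model
    scores the heteronormative version as more likely."""
    preferred = sum(1 for hetero, queer in pairs if score(hetero) > score(queer))
    return preferred / len(pairs)

# Toy scorer: average word frequency in a tiny mock "training corpus",
# mimicking a model that rates familiar phrasings as more likely.
corpus = ("he and she held hands . he and she held hands . "
          "she held hands with her").split()

def toy_score(sentence):
    words = sentence.lower().split()
    return sum(corpus.count(w) for w in words) / len(words)

pairs = [("he and she held hands", "she held hands with her")]
print(preference_rate(pairs, toy_score))  # the toy model prefers the straight sentence
```

A real evaluation would swap `toy_score` for a large language model’s sentence likelihood and run thousands of such pairs; a rate near 50% would suggest the model treats both phrasings as equally plausible.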
To further train the AI, Felkner and her team collected a dataset of about 2.8 million tweets and over 90,000 news articles from 2015 through 2021 that include examples of queer people talking about themselves or provide “mainstream coverage of queer issues.” She then began feeding it back to the AI models she was focused on. News articles helped, but weren’t as effective as Twitter content, Felkner said, because the AI learns best from hearing queer people describe their varied experiences in their own words.
As anthropologist Mary Gray told Forbes last year, “We [LGBTQ people] are constantly remaking our communities. That’s our beauty; we constantly push what is possible. But AI does its best job when it has something static.”
By re-training the AI model, researchers can mitigate its biases and ultimately make it more effective at making decisions.
“When AI whittles us down to one identity, we can look at that and say, ‘No. I’m more than that,’” Gray added.
The consequences of an AI model encoding bias against queer people could be more severe than a Shopify bot potentially sending slurs, Felkner noted: it could also affect people’s livelihoods.
For example, Amazon scrapped a program in 2018 that used AI to identify top candidates by scanning their resumes. The problem was, the computer models almost exclusively picked men.
“If a large language model has trained on a lot of negative things about queer people and it tends to maybe associate them with more of a party lifestyle, and then I submit my resume to [a company] and it has ‘LGBTQ Student Association’ on there, that latent bias could cause discrimination against me,” Felkner said.
The next steps for WinoQueer, Felkner said, are to test it against even larger AI models. Felkner also said tech companies using AI need to be aware of how implicit biases can affect those systems and be receptive to using programs like hers to check and refine them.
Most importantly, she said, tech firms need to have safeguards in place so that if an AI does start spewing hate speech, that speech doesn’t reach the human on the other end.
“We should be doing our best to devise models so that they don't produce hateful speech, but we should also be putting software and engineering guardrails around this so that if they do produce something hateful, it doesn't get out to the user,” Felkner said.
Terms of Misuse?: Breaking Down the Data TikTok Collects on Its U.S. Users
09:00 AM | July 19, 2022
TikTok has come under renewed scrutiny over how it handles U.S. data, with some lawmakers calling for an investigation into the Culver City-based company.
What kind of data does TikTok collect? And should we worry about a potential national security threat when Americans’ data is accessed by employees of ByteDance, TikTok’s Chinese parent company?
To answer these questions, dot.LA reviewed TikTok’s privacy policy and interviewed Thomas Germain, a technology writer for Consumer Reports who specializes in privacy issues.
What Data TikTok Collects
Like other social media giants, TikTok gobbles up a lot of user information. To start, TikTok receives names, ages, phone numbers and emails when people sign up for the service. The app also knows users’ approximate locations and mobile device identifiers, such as IP addresses.
Germain told dot.LA the most valuable info may come from the way users interact with the video sharing app. TikTok is quite good at figuring out people’s interests based on the videos or accounts they’ve previously liked or followed. Those insights are useful for advertisers and, potentially, for spreading political messages, Germain noted.
“This vast trove of data that every social media company has—on what people are interested in, what makes them upset, what makes them happy—is incredibly valuable,” he said.
The company’s privacy policy permits TikTok to collect a wide range of additional data, from consumers’ keystroke patterns to biometric info. However, the company says it doesn’t necessarily take in or store all of this. For example, keystroke patterns may be used solely for anti-fraud and spam purposes, according to TikTok. Regarding biometrics, TikTok said editing features may automatically locate a person’s face to apply an effect, but those features do not uniquely identify individuals.
Why U.S. Government Officials Are Concerned
TikTok is owned by Beijing-based tech giant ByteDance, and China is an economic and foreign policy rival of the U.S. With the Chinese Communist Party (CCP) exerting considerable power over the nation’s tech companies, U.S. lawmakers and administration officials contend that TikTok’s Chinese ownership poses a national security risk.
“The CCP has a track record longer than a CVS receipt of conducting business & industrial espionage as well as other actions contrary to U.S. national security, which is what makes it so troubling that [ByteDance] personnel in Beijing are accessing this sensitive and personnel data,” Federal Communications Commissioner Brendan Carr recently said.
TikTok says it has never provided any U.S. user data to the Chinese government, nor would it do so if asked. Additionally, the company recently announced that all U.S. user traffic is now routed to American software giant Oracle’s servers.
“The TikTok app is not unique in the amount of information it collects, compared to other mobile apps,” the company said.
TikTok is hardly the only company swallowing a lot of data on Americans; businesses from car makers to smart doorbell firms do the same. Consumers’ credit card purchases, contact lists and recent GPS locations are hawked by hundreds, if not thousands, of companies in the so-called data broker industry, Germain noted.
“If the Chinese government wanted it, they could just go out and buy it because it's for sale,” he said. “...I think people, when they're worried about TikTok doing something, they should ask themselves whether they should be worried about American companies doing the same thing.”
Still, Germain said there’s some genuine cause for concern, since China’s government has previously pushed the country’s companies to do its bidding. But to Germain, that concern has less to do with China knowing your phone number and more to do with propaganda.
“The Chinese government could instruct Tiktok to manipulate its algorithm to show people content that promotes the goals of the Chinese government,” Germain said. “That could totally happen and that is something that is of concern. But that does start to move away from questions of data privacy.”
Christian Hetrick
Christian Hetrick is dot.LA's Entertainment Tech Reporter. He was formerly a business reporter for the Philadelphia Inquirer and reported on New Jersey politics for the Observer and the Press of Atlantic City.
Gen Z Hates Ads—Unless They’re On TikTok. Here’s Why
11:01 AM | April 07, 2023
This is the web version of dot.LA’s daily newsletter. Sign up to get the latest news on Southern California’s tech, startup and venture capital scene.
TikTok is awash with ads. There are microinfluencers pushing products that fit the latest microtrend. There are celebrity influencers launching their skincare brands. Ads that look like they were re-purposed from high-quality videos. And ads that try to mimic casual influencer videos.
For the past few years, marketing agencies have fully shifted their strategies to prioritize TikTok. On the surface, this might seem contrary to what we know about Gen Z, which is that they hate ads. Digital consumer research firm Bulbshare found that 99% of Gen Z skips ads when given a chance, and 74% feel there are too many ads.
But TikTok ads hit differently. A Statista study from March found that 38% of TikTok users are okay with ads in exchange for being able to use the app for free. And 28% of TikTok users have bought products promoted by celebrities or influencers, a rate 10 percentage points higher than among non-users. Considering that 60% of TikTok users are Gen Z, it’s clear that these percentages reflect young consumers’ habits more than those of any other demographic.
So what makes TikTok advertising more potent than other methods of reaching consumers?
In short, TikTok ads are so ingrained within the platform’s influencer culture ecosystem that it’s nearly impossible to differentiate them from other pieces of content.
For example, an influencer’s get-ready-with-me video might highlight beauty products a creator was paid to promote or shove in completely unrelated products, like Pop-Tarts in a makeup tutorial. Because these videos look identical to much of the non-promotional content on an influencer’s account, paid promotions are indiscernible from those that are unpaid. Even videos created by brands sometimes look like they were filmed by influencers. In fact, the Statista study found that 15% of users have difficulty distinguishing ads from unpaid content.
Naturally, with so much success on TikTok, brands have opted to use it as a starting point for new marketing campaigns. According to Glossy, TikTok is now the testing ground to see how video styles, tones and messages are received. Whatever works on TikTok is then re-purposed across other social media platforms, like Snapchat and Instagram.
But some brands are also trying to figure out how to integrate ideas that succeed on TikTok into other platforms. This has led to a particularly awful type of ad where something that was ostensibly filmed for TikTok is presented in the wide-screen format people are used to seeing on YouTube or on TV. Take this Tractor Supply Company ad featuring country music star Lainey Wilson riding a tractor. The company specifically made the ad, which aired during the November premiere of “Yellowstone,” to be “TikTok style” as a way to appear approachable and down-to-earth. In other words, even ads that don’t appear on TikTok are adopting the video-sharing app’s native style.
It’s unclear how successful this transfer of content is, or if someone watching TV is receptive to this video style. But it’s a relatively low-cost test since filming a “lo-fi” video for TikTok and then re-using it across other advertising channels is less expensive than creating unique content for each platform. - Kristin Snyder
Kristin Snyder
Kristin Snyder is dot.LA's 2022/23 Editorial Fellow. She previously interned with Tiger Oak Media and led the arts section for UCLA's Daily Bruin.