Will.i.am Predicts Musicians Will Invest in Training AI to Create Hit Songs
Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.
At the headquarters of the FYI app in Hollywood on Tuesday night, founder and CEO will.i.am made a bold prediction:
“I think what's going to happen real soon is that instead of going to the studio and making a song to release on Spotify, or Apple Music or [other] streaming platforms, artists are going to go into the studio to train their model,” he said. “And it's their model that they're going to train, because they're training it to their fingerprint, to their creative thumbprint.”
Will.i.am launched FYI (Focus Your Ideas) in late May to help creatives collaborate on projects. The app lets users communicate via voice and video calls and centralize file management and to-dos, and, most crucial to will.i.am’s mission, it comes outfitted with an AI that suggests new ideas.
It might seem surprising to hear a recording artist be so invested in AI at a time when some music producers and the publishing industry writ large are warning against it. But will.i.am has been advocating for the use of AI since 2010, going so far as to release a music video for “Imma Be Rockin’ That Body” that features him showing off an AI music model contained in a briefcase, much to the chagrin of his bandmates.
During the LA Tech Week panel co-hosted by AI LA and moderated by Karen Allen, CEO of Infinite Album, a Los Angeles-based startup that makes AI-generated, copyright-free music for Twitch streamers, will.i.am said that instead of buying an off-the-shelf AI and customizing it with one or two existing public songs (the approach that produced the fake Drake copyright disaster in April), musicians will invest time and energy in training their own LLM on every available demo or take of a track.
How would that work? According to the Grammy Award-winning recording artist, musicians could feed hundreds of thousands of hours of tape into the AI, which would build a clear understanding of the artist’s creative process that the model could then replicate to spit out finished songs.
This would entail the musician going into the studio and laying down tracks in a range of styles until the AI understood the direction the artist wants to take: cello music one day, samba the next, then punk, he suggested.
Fellow panelist Matthew Adell, a longtime record executive who now runs the AI-generated music company SOMMS.AI, said he works with musicians and library holders who are eager to use the technology now. “These custom models for specific purposes are happening,” Adell said. “We have people who are creating cover versions, remixes and new genre versions of functional music libraries [using AI].”
Adell added, “The real use currently is non-real-time generation, where you generate a lot of stuff in bulk… [Then] you have effectively a curated A&R person determine the stuff that's usable and beneficial to whatever their value proposition is, and then with the client, you might have someone go and master it or mix it manually.”
While the use of AI in music is still in its infancy, other musicians are already experimenting with it. In April, several musicians told dot.LA they were using it, mainly for idea generation. At the time, Matt Lara, a musician and product specialist at Native Instruments, said he saw real potential for AI to streamline the complex mixing and mastering process, allowing him to master several tracks at once with relative ease.
For his part, will.i.am said he’s already been experimenting with the technology, noting he recently worked with a music LLM made by Google.
“I heard a song that I’d written that I’d never wrote, a song I’d never produced that I produced,” will.i.am said. “That was so freakin’ fierce. Every sound was crisp. Every synth was like, Yeah, that's the sound. The bass was like, Yo, that's a right bass. Drums are punchy. The lyrics was like Yo, I would have [written] that myself.”