Inside Deepfake Music and How AI-Generated Sound Might Actually Help Compensate Musicians
Last week, a two-minute track that sounded like a new song by Drake and The Weeknd, released by a TikTok user under the handle ghostwriter977, went viral. After surpassing 15 million views on TikTok, the song was streamed more than 600,000 times. Then Universal Music Group, which represents both artists, requested it be taken down from music streamers including Apple Music, Deezer, Spotify and Tidal. The track was later removed from YouTube and TikTok as well. Why? Because the song wasn’t actually created by the artists. It was generated by AI software.
To a casual listener, the song, “Heart on My Sleeve,” does sound like Drake and The Weeknd, and it promptly illustrated the divide that’s been growing in the music industry over AI’s place in it. On one side are artists, producers and mixers who are pro-AI, eager to see its potential creative and time-saving benefits. On the other is a crowd of dedicated, older producers who want nothing to do with it. And accelerating the debate are meddlers like ghostwriter977, or people on TikTok using AI to make Kanye West sound like he’s singing Adele.
The main argument for using AI in music is that it furthers the creative process. Some proponents also contend that it lowers the barrier to entry for people with less experience, since they can make a song without needing to know the intricacies of production. And, as this latest Drake spoof shows, AI songs can jumpstart a person’s career – sometimes without the world even knowing their name – far faster than a label could dream of.
“I use it as inspiration to draw the structure to what I'm emulating or what I'm creating, and then create the final product,” said Alec Strasmore, Head of Digital Twins at Reflekt Studios and Post Malone's former assistant. He added that it’s unclear how the law might determine whether something was written by an AI: “you can just generate it, and add your human touch to it at the finish line” to say it wasn’t made by a bot, Strasmore theorized.
Since the song mimicked Drake and The Weeknd but was technically original, it didn’t appear to violate a specific copyright. But opponents contend the central legal issue with any generative AI is whether it was trained on copyrighted material that is then used to produce the end result. The problem, of course, is that given how most AIs are developed, it can be difficult to trace that process backward and figure out exactly what the model learned from.
Generative AI is just beginning to face legal challenges; photo database Getty Images sued Stability AI, maker of the open-source AI art tool Stable Diffusion, in February, alleging its AI copied more than 12 million copyrighted images without permission. That case is still pending, but it could set a precedent for how other industries, including music, handle AI.
“If it comes from the artist, in the sense of ‘hey, I think that’s wrong, take it down,’ it should be done,” Strasmore said. “If it’s this empire-like approach to being the sole owner of all your favorite artists’ sound and everything, and them coming after these individuals, I feel like is tyrannical and aggressive [and] I think that the artists should speak out about it before the labels come aggressively chasing these consumers and fans and creators.”
But to the point about ghostwriters not being paid, Strasmore said that’s where he sees an advantage. By bypassing the middleman – in this case, UMG or any of its myriad imprint labels – and going viral, an artist can reap more of the rewards, because they don’t have to share profits with production staff or a distribution partner. It’s the old strategy of making it big without a label by selling CDs out of the trunk of your car, only modernized.
“I also think it's powerful for the writers and the producers who haven't necessarily received the proper credit or payment throughout the years,” Strasmore noted. “They haven’t been paid in the way the artist gets paid. This is an opportunity for the producer and the artists to really own their music.”
But to be clear, this can go both ways. There’s, in theory, no reason UMG or another label couldn’t train a generative AI on an artist’s music and lyrics to put out a hit album. After all, there’s already a whole subset of musical deepfakes.
Matt Lara, a musician and product specialist at Native Instruments, noted that there’s a lack of regulation limiting how AI can be used in music production, or clarifying copyright issues. But he drew a clear line between inspiration and copyright infringement.
“If I heard a classical song, and then I go make a classic hip-hop song, should I have to credit the original classical artist?” Lara asked. “That's what we do as humans: everything we create and produce comes from inspiration that you gathered from something else.”
UMG, for its part, said it is interested in testing new technologies while also protecting its artists’ copyrights.
In a statement, UMG’s senior vice president of communications James Murtagh-Hopkins said UMG owes some of its success to using emerging technologies with its artists’ work. But, Murtagh-Hopkins noted, “the training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”
Murtagh-Hopkins also directly referenced “Heart On My Sleeve,” adding, “these instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists. We’re encouraged by the engagement of our platform partners on these issues–as they recognize they need to be part of the solution.”
For his part, Lara said he sees particular utility in using AI to help him master multiple tracks at once.
“The mixing and mastering is definitely one of the most used; it's been used for a while, especially because it's so mathematic,” Lara added. One example he gave was Landr, which he described as an AI mastering tool where you “put your song in and then it does a whole bunch of mathematical calculations based on dynamic range and compression and pretty much spit[s] out how your track sounds.”
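To make that idea concrete, here is a minimal sketch of the kind of loudness and dynamic-range math an automated mastering pass involves. It is not Landr’s actual algorithm; the function names, the -14 dBFS loudness target and the hard limiter are illustrative assumptions for this example only.

```python
import numpy as np

def measure_rms_db(samples: np.ndarray) -> float:
    """Root-mean-square level of the signal, in decibels relative to full scale (dBFS)."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(max(rms, 1e-9))

def simple_master(samples: np.ndarray, target_rms_db: float = -14.0) -> np.ndarray:
    """Toy 'mastering' pass: push the track toward a target loudness, then limit peaks.

    Illustrative only -- commercial tools analyze far more (spectral balance,
    stereo width, genre references) than this two-step sketch does.
    """
    # 1. Loudness: apply makeup gain so the RMS level lands near the target.
    gain_db = target_rms_db - measure_rms_db(samples)
    gained = samples * (10 ** (gain_db / 20))

    # 2. Dynamic range: hard-limit anything that would clip after the gain stage.
    return np.clip(gained, -1.0, 1.0)

# Example: a quiet sine-wave "track" pushed toward streaming-style loudness.
t = np.linspace(0, 1, 44100)
quiet_track = 0.1 * np.sin(2 * np.pi * 440 * t)
mastered = simple_master(quiet_track)
print(f"before: {measure_rms_db(quiet_track):.1f} dBFS, after: {measure_rms_db(mastered):.1f} dBFS")
```

The point of the sketch is simply that the “whole bunch of mathematical calculations” Lara mentions boils down to measurable quantities like loudness and dynamic range, which is why mastering has been one of the earliest and easiest parts of production to automate.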
Still, Lara said that he’s not certain that AI is good enough to make complex, fully-fledged tracks that aren’t reliant on just a hip-hop beat. But, he added, “that's just because the technology [hasn’t] progressed yet. I do think things will get there.”