Spotify has removed over 75 million AI-generated “spammy” music tracks from its platform in the past year, intensifying efforts to curb unauthorized AI use of artists’ voices, the company announced Thursday.
The Swedish audio giant plans to strengthen enforcement against impersonation violations, introduce a new spam filtering system, and work with partners to label tracks that incorporate AI.

“We envision a future where artists control how or if they use AI in their creative process,” Spotify stated on its website.
“We leave creative decisions to artists while protecting them from spam, impersonation, and deception, and increasing transparency for listeners.”

The move comes as tech platforms grapple with a surge in AI-generated content. While some creators embrace AI tools, others report harm from unauthorized impersonation.
“Spotify’s actions are the right step for artists and preserving platform integrity,” said Rob Enderle, principal analyst at Enderle Group in Bend, Oregon.
Spotify will permit vocal impersonation only with an artist’s consent, and it aims to shorten review times for content mismatches, allowing artists to report issues even before a track is released.
“Unauthorized AI voice cloning exploits artists’ identities, undermines their work, and threatens its integrity,” Spotify said. “Some artists may license their voices to AI projects—that’s their choice. Our role is to ensure that choice remains theirs.”
Experts warn that the rising popularity of AI tools will increase deepfakes and AI-generated content, posing ongoing challenges for tech companies to monitor.