
Spotify Updates AI Music Policies to Fight Spam: What It Means


Spotify has updated its policies to strengthen protections against what it calls the “worst” use cases of AI-generated music on the platform. The announcement follows months of criticism of the streaming service over its lack of policies specifically targeting AI music and over the growing prevalence of AI acts on the site, including the viral AI act The Velvet Sundown.

This doesn’t mean the streaming service is taking a hardline stance against all AI music. As Spotify’s Head of Music, Charlie Hellman, said in a press briefing: “I want to be clear about one thing. We’re not here to punish artists for using AI authentically and responsibly. We hope that artists’ use of AI production tools will enable them to be more creative than ever, create more content that excites fans and offer the best possible experience on Spotify. But we are here to stop the bad actors who are gaming the system, and we can only benefit from [the] good side of AI if we aggressively protect against the downside.”

Spotify also revealed that it has already been working quietly to mitigate the “downside” of AI. Over the last year, the company says it removed over “75 million spammy tracks” from the site for violating its guidelines. Some, but not all, of these tracks are believed to be AI-generated.

Spotify’s announcement on Thursday (Sept. 25) is largely an update to its pre-existing rules. Already, the company has targeted negative uses of AI music by cracking down on impersonation, artificial streaming and spam content. Those are still the primary ways that the service is policing what it calls “the worst parts of Gen AI,” but now those rules carry new strength. Below is a breakdown of Spotify’s three new updates:

Music Spam Filter

Spotify’s music spam filter has been updated so that uploaders who rely on common tactics like mass uploads, duplicates, SEO hacks, artificially short tracks and “other forms of slop,” as the company’s press release put it, will be flagged and their tracks will no longer be recommended to users. A company spokesperson says the new filter will be rolled out slowly and cautiously “to ensure we’re not penalizing the wrong uploaders.”

Just as before, this filter is not specific to AI-generated songs; it targets all forms of spam, whether human-made or AI-generated. But because AI models can generate new songs in seconds, AI is often used to perpetuate mass uploads on streaming platforms. Experts consider spam detrimental because it is often used to spread fraudulent or artificial streaming activity across a large number of tracks, with the goal of siphoning money away from the royalty pools intended for real artists.

One example of this scheme came to light last September, when federal prosecutors indicted a North Carolina musician for allegedly using AI to create “hundreds of thousands” of songs and then using the AI tracks to earn more than $10 million in fraudulent streaming royalties.

Stronger Impersonation Rules

Spotify has always had a policy barring deceptive content and deepfakes, but its impersonation policy has now been updated to clarify how it handles claims about AI voice clones and other forms of unauthorized vocal impersonation.

Spotify is also addressing its “content mismatch” problem with new protections. Content mismatch occurs when someone uploads a song to another artist’s Spotify page without permission, in the hopes of boosting the song’s streams. In recent months, the issue was brought to wider attention by Paul Bender of the band Hiatus Kaiyote, who experienced content mismatch with his side project The Sweet Enoughs and took to social media to call out Spotify over it.

Now, Spotify has announced that it will be “investing more resources” into the issue, “reducing the wait time for review, and enabling artists to report ‘mismatch’ even in the pre-release state.”

Standardized AI Disclosures

The newest addition to Spotify’s AI-related policies is an integration with DDEX (Digital Data Exchange), the organization that sets global standards for music metadata. The integration will let those uploading music to Spotify label when and how AI was used in their creative process. If a track was written almost entirely by one person but one specific instrument was generated with AI, for example, the uploader can self-report that to Spotify and have it displayed in the track’s credits on Spotify, or any other participating service, using the DDEX standard. The disclosures can cover songwriting, production, vocal, mixing or mastering credits.
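
For illustration, here is a rough sketch of what a per-track, per-contribution AI disclosure could look like once it reaches a service’s credits system. The field names and values are hypothetical and are not taken from the DDEX specification, which defines its own metadata schema for release deliveries.

```python
# Hypothetical sketch of a per-contribution AI-usage disclosure.
# Field names and values are illustrative only; the actual DDEX standard
# defines its own schema and vocabulary for this metadata.
from dataclasses import dataclass, field, asdict
from typing import Optional
import json


@dataclass
class AIDisclosure:
    contribution: str           # e.g. "songwriting", "production", "vocals", "mixing", "mastering"
    ai_involvement: str         # e.g. "none", "assisted", "fully_generated"
    tool: Optional[str] = None  # optionally, which AI tool was used


@dataclass
class TrackDisclosure:
    title: str
    artist: str
    disclosures: list = field(default_factory=list)


# Example: a track written and performed by a person, where only one
# instrument part was generated with an AI tool.
track = TrackDisclosure(
    title="Example Song",
    artist="Example Artist",
    disclosures=[
        AIDisclosure(contribution="songwriting", ai_involvement="none"),
        AIDisclosure(contribution="vocals", ai_involvement="none"),
        AIDisclosure(contribution="production", ai_involvement="assisted",
                     tool="AI instrument generator"),
    ],
)

print(json.dumps(asdict(track), indent=2))
```

In this spirit, a listener-facing credit could mark the production contribution as AI-assisted while the songwriting and vocal credits remain fully human, rather than labeling the whole track one way or the other.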

“We know the use of AI is going to be a spectrum, with artists and producers incorporating AI in various parts of their creative workflow,” says Sam Duboff, Global Head of Marketing and Policy, Spotify for Artists. “And this industry standard will allow for more accurate, nuanced disclosures. It won’t force tracks into a false binary where a song either has to be categorically AI or not AI at all. Why does this matter? Listeners get clarity, and artists get a consistent way to share their process with listeners across all services without fear that it’ll affect how their work’s promoted. This isn’t about gatekeeping. It’s about building trust across the whole music ecosystem.”

To date, much of the discussion about generative AI on streaming services has focused on a binary: content is either fully AI-generated or fully human-made. Deezer, so far the most vocal of the services about AI music, has used its proprietary AI detection tools to find fully AI-generated songs, tag them as such and remove them from algorithms and editorial playlists. SoundCloud prohibits the monetization of songs that are “exclusively generated through AI.” Neither platform tracks or penalizes songs that are only partially AI-generated.

So far, the new DDEX AI self-disclosure system has been adopted by Amuse, AudioSalad, Believe, CD Baby, DistroKid, Downtown Artist & Label Services, EMPIRE, Encoding Management Service – EMS GmbH, FUGA, IDOL, Kontor New Media, Labelcamp, NueMeta, Revelator, SonoSuite, Soundrop and Supply Chain.

While the major music companies — including Universal Music Group — have not yet signed on to the new DDEX protocol, a spokesperson for UMG shared their support for Spotify’s updated policies: “We welcome Spotify’s new AI protections as important steps forward consistent with our longstanding Artist Centric principles. We believe AI presents enormous opportunities for both artists and fans, which is why platforms, distributors and aggregators must adopt measures to protect the health of the music ecosystem in order for these opportunities to flourish. These measures include content filtering; checks for infringement across streaming and social platforms; penalty systems for repeat infringers; chain-of-custody certification and name-and-likeness verification. The adoption of these measures would enable artists to reach more fans, have more economic and creative opportunities, and dramatically diminish the sea of noise and irrelevant content that threatens to drown out artists’ voices.”

According to Spotify’s Hellman: “This won’t be the end of the story. AI is evolving fast, but today’s announcements will be a meaningful step forward.”
