
Deezer's AI Music Problem: 75,000 Fake Tracks a Day and What It Means

Deezer receives 75,000 AI-generated tracks daily, representing 44% of uploads but only 1-3% of actual listening. The platform detects and flags synthetic music to prevent fraud, raising questions about how the rest of the streaming industry will respond.

Martin Holloway · Published 3 weeks ago · 5 min read · Based on 1 source

Deezer, the French music streaming platform that competes with Spotify and Apple Music, is receiving nearly 75,000 AI-generated tracks every single day. That's about 2 million per month, and it represents 44% of all new music uploaded to the service, according to data released by the company.

Here's the surprising part: despite making up nearly half of all daily uploads, AI-generated music accounts for only 1-3% of actual listening on the platform. That mismatch between what gets uploaded and what people actually listen to is the central story here.

How Deezer Is Catching AI Music

Deezer claims to be the first major streaming service to automatically detect and flag AI-generated content without waiting for user reports or publisher disclosures. The company uses fraud-detection algorithms—software designed to spot suspicious activity—to identify synthetic music. About 85% of AI-generated tracks get flagged as fraudulent and demonetized, meaning the uploader doesn't get paid for streams.
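
Deezer hasn't published how its detection pipeline actually works, but the flag-and-demonetize logic it describes can be sketched in rough terms. The classifier score, thresholds, and field names below are hypothetical assumptions for illustration, not Deezer's system.

```python
# Hypothetical sketch of a flag-and-demonetize decision.
# Deezer has not published its detection logic; the scores, thresholds,
# and field names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Track:
    track_id: str
    ai_score: float          # 0-1 output of an assumed AI-music classifier
    bot_stream_ratio: float  # share of plays attributed to suspected bots


def moderate(track: Track, ai_threshold: float = 0.9, bot_threshold: float = 0.5) -> str:
    """Return 'demonetize', 'flag', or 'allow' for a newly analyzed track."""
    if track.ai_score >= ai_threshold and track.bot_stream_ratio >= bot_threshold:
        return "demonetize"  # synthetic track with manipulated streams earns nothing
    if track.ai_score >= ai_threshold:
        return "flag"        # tagged as AI-generated but left available to listeners
    return "allow"


print(moderate(Track("t1", ai_score=0.97, bot_stream_ratio=0.80)))  # demonetize
print(moderate(Track("t2", ai_score=0.95, bot_stream_ratio=0.05)))  # flag
```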

Why does this matter? Because the economics create a tempting incentive for bad actors. Even if most of your AI-generated tracks earn nothing, if a few thousand slip through detection and rack up thousands of plays each, the money adds up quickly. That's doubly true if those streams are also artificially inflated—generated by bots rather than real listeners.
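
To see how quickly that math compounds, here is a back-of-the-envelope sketch. The per-stream payout and play counts are assumptions chosen for illustration, not figures reported by Deezer.

```python
# Back-of-the-envelope payout math. The per-stream rate and play counts
# are assumptions, not Deezer's actual figures.

per_stream_payout = 0.003            # assumed USD per stream, a commonly cited ballpark
daily_ai_uploads = 75_000            # one day's AI-generated uploads, per Deezer
detection_rate = 0.85                # share flagged as fraudulent and demonetized
plays_per_surviving_track = 5_000    # hypothetical bot-inflated play count

surviving = daily_ai_uploads * (1 - detection_rate)
revenue = surviving * plays_per_surviving_track * per_stream_payout
print(f"{surviving:,.0f} undetected tracks x {plays_per_surviving_track:,} plays "
      f"~= ${revenue:,.0f}")
# 11,250 tracks x 5,000 plays x $0.003 per stream ~= $168,750 from one day's uploads
```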

Qobuz, a high-fidelity streaming service, has also implemented AI detection. But Spotify, Apple Music, and Amazon Music have not publicly disclosed whether they're doing the same or what their AI-upload statistics look like. That silence makes it hard for anyone outside those companies to know how widespread the problem actually is.

The Quality Mismatch

The gap between uploads and actual listening is telling. Even though generative AI music has become cheap and easy to produce—you can use free or low-cost tools online—listeners aren't biting. Current AI music generation models produce technically competent but often generic-sounding tracks. Most people prefer music from artists they know or at least recognize.

Analysis: This mirrors what we've seen before when digital tools democratized content creation. When anyone can upload anything, volume skyrockets, but quality and audience engagement don't follow automatically. It still takes human listeners to sort out what is actually worth hearing.

The economic incentive for flooding platforms with synthetic music is real, though. Even a tiny payout per stream, multiplied across hundreds of thousands of tracks, can generate real revenue—especially if detection systems miss some of the fraudulent plays.

A Platform Takes a Stance

Deezer's decision to publicly disclose these numbers is unusual. Most streaming platforms treat content moderation details as trade secrets. But rather than banning AI music outright, Deezer is flagging and monitoring it.

That approach makes sense for legitimate uses. AI-generated background music for podcasts, meditation tracks, or ambient soundscapes serves real purposes and doesn't pretend to be human creation. The problem isn't AI music itself; it's fraud and manipulation.

Worth flagging: There's a legal grey area here that Deezer's flagging system doesn't solve. Many AI music generators are trained on copyrighted songs, which raises questions about whether the original artists' intellectual property rights are being respected. Those issues go well beyond the current detection and flagging approach.

What Happens Next

The sheer volume of AI-generated uploads, 75,000 per day, shows that AI music generation has hit an inflection point: it is now cheap and easy enough to be done at scale. As the models improve and their output gets harder to spot, platforms will face mounting pressure to invest more in detection and to disclose what is happening on their services.

In this author's view: We've been here before with email spam. Detection systems have to keep improving as bad actors adapt. Music platforms will need to continuously upgrade their automated systems as AI-generated tracks become harder to distinguish from human-made music by ear alone.

Other platforms will likely face pressure—from regulators, from musicians, or from consumer demand—to disclose their own AI-upload statistics and detection capabilities, just as Deezer has done.

The precedent that's forming now, where AI-generated music gets tagged and monitored rather than banned, resembles how platforms already handle remixes, covers, and other derivative content. It preserves legitimate use cases while enabling fraud detection and transparency.

The real question: as AI music improves and becomes indistinguishable from human work, will its 1-3% share of listening stay flat, or will listeners start engaging more? The answer will reshape streaming economics and how artists get paid.