How AI-Made Music Is Flooding Streaming Services — and Why Most People Aren't Listening
Music streaming service Deezer says it receives nearly 75,000 songs created entirely by artificial intelligence every single day. That's almost half of all new music uploaded to the platform, according to data the company released. Over a month, that adds up to more than 2 million AI-generated tracks.
Here's the surprising part: despite making up roughly half of everything new being uploaded, AI-made music accounts for only 1 to 3 percent of what people actually listen to on Deezer. In other words, these songs are flooding the platform, but audiences are ignoring them almost completely.
How Platforms Spot AI Music
Deezer has built software that can automatically identify which songs were created by AI rather than by humans. The company claims to be the first major streaming service to do this independently — without having to ask users or music uploaders to report AI tracks themselves.
The system flags suspicious patterns in uploads and listening activity. Of the streams that AI-generated tracks receive, Deezer classifies 85 percent as fraudulent and excludes them from royalty payments, so those plays earn nothing.
The reason this matters: AI makes it possible for bad actors to commit a new kind of fraud. In the past, people would use automated bots to play the same human-made song over and over to artificially inflate its popularity and earning power. Now, someone could use AI to create thousands of songs cheaply, then use bots to play those AI songs repeatedly. It's the same old fraud, but with an AI twist.
Another streaming service called Qobuz has also started flagging AI music. But bigger platforms like Spotify, Apple Music, and Amazon Music haven't publicly said whether they detect AI music or how much of it they're seeing.
The Quality Problem
The fact that AI music makes up 44 percent of uploads but only 1 to 3 percent of actual listening tells us something important: people generally don't want to listen to it yet.
Analysis: This pattern — where something new floods into a system but nobody actually wants it — has happened before with other automated content online. Early on, the quality just isn't there. Current AI music generation tools still have limitations, and most listeners prefer music from artists they know and trust.
There's another reason bad actors flood platforms with AI music, though. Even if each AI song only makes a tiny amount of money per play, creating thousands of them costs almost nothing. If they can slip some fraudulent plays past the detection system, those tiny payments add up quickly across thousands of tracks.
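The economics above can be made concrete with a back-of-envelope calculation. Every number below is an illustrative assumption, not a figure from Deezer or any real payout schedule:

```python
# Illustrative fraud economics: all numbers are assumptions, not Deezer data.
per_stream_payout = 0.003   # hypothetical payout per play, in dollars
tracks_uploaded = 10_000    # AI tracks a bad actor generates cheaply
plays_per_track = 500       # bot plays per track that slip past detection

revenue = per_stream_payout * tracks_uploaded * plays_per_track
print(f"${revenue:,.2f}")   # tiny per-play sums add up across many tracks
```

Even at a fraction of a cent per play, the payoff scales with track count, which is exactly the variable AI generation makes nearly free.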
Why Deezer Is Being Honest About This
Most streaming services keep their content moderation data secret. They don't tell the public how many fake streams they catch or how much AI music is being uploaded. Deezer broke from that pattern by releasing these numbers publicly.
The company's choice not to ban AI music entirely — instead just flagging it — suggests they think some uses are legitimate. AI-generated music can work well for background soundtracks in podcasts, meditation apps, or ambient music where human creativity isn't the main appeal.
Worth flagging: There's a big unsolved legal problem here. Most AI music generation tools were trained on existing songs, including ones protected by copyright. That creates potential legal trouble that's different from how music sampling or cover songs are normally licensed. This could become a major issue down the road.
The Technology Behind It
Deezer's systems have to process and identify 75,000 AI tracks every single day. That requires serious technological muscle. The detection systems likely work by analyzing the audio itself (using a technique called fingerprinting), examining the song's metadata (like who uploaded it and when), and spotting suspicious patterns in how the music is being played.
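One plausible way to combine those three kinds of evidence is a simple weighted score. This is a minimal sketch under stated assumptions: the signal names, weights, and thresholds below are invented for illustration and do not describe Deezer's actual pipeline.

```python
# Hypothetical sketch: combining weak signals into one suspicion score.
# Signal names, weights, and thresholds are illustrative assumptions,
# not Deezer's real detection system.

def suspicion_score(track):
    score = 0.0
    # 1. Audio signal: fingerprint resembles known AI-generator output.
    if track.get("fingerprint_matches_ai_model"):
        score += 0.5
    # 2. Metadata signal: one account uploading an implausible volume.
    if track.get("uploads_by_account_today", 0) > 100:
        score += 0.3
    # 3. Play-pattern signal: streams concentrated in a few repeat accounts.
    if track.get("unique_listener_ratio", 1.0) < 0.05:
        score += 0.2
    return score

track = {
    "fingerprint_matches_ai_model": True,
    "uploads_by_account_today": 250,
    "unique_listener_ratio": 0.01,
}
print(suspicion_score(track))  # high score -> flag for review, withhold royalties
```

The design point is that no single signal is decisive; a fingerprint match alone might be a false alarm, but combined with bulk uploading and bot-like listening patterns, the case becomes strong.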
An 85 percent flag rate sounds high, but it still leaves 15 percent of fraudulent AI-driven streams undetected and earning royalties. We also don't know the false-positive rate: how many human-made songs are wrongly flagged as AI-generated, a problem that could hurt legitimate artists.
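The gap between "85 percent flagged" and "fraud solved" is easy to see with a small worked example. The 1,000-stream sample below is an illustrative assumption; only the 85 percent figure comes from the article, and the false-positive column stays unknown:

```python
# Illustrative detection arithmetic (sample size is an assumption).
fraud_streams = 1_000
flag_rate = 0.85            # the 85% figure reported in the article

caught = round(fraud_streams * flag_rate)  # flagged and demonetized
missed = fraud_streams - caught            # slip through and still get paid
print(caught, missed)                      # 850 caught, 150 missed

# The other side of the ledger is unreported: out of every batch of
# legitimate human-made streams, how many are wrongly flagged?
false_positives = None  # unknown from the published data
```

Without the false-positive number, the 85 percent figure tells only half the story: an aggressive filter can always flag more fraud at the cost of penalizing more real artists.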
In this author's view: This resembles something we saw decades ago with email spam. When spam filtering first started, it caught some spam but also blocked legitimate emails. Over time, filters got better as they adapted to new tricks from spammers. Music platforms will probably face the same challenge as AI gets better at sounding human.
Why This Is Happening Now
AI music generation didn't emerge in a vacuum. For the past 20 years, tools for making music have become cheaper and easier to use, to the point that anyone with a laptop can produce a professional-sounding song. Streaming services like Deezer then made it simple to upload that music to millions of listeners worldwide without traditional gatekeepers — no record labels or radio stations deciding what gets heard.
AI is just the latest step in this democratization trend. Where music used to be filtered by industry professionals making educated guesses about what would sell, now the internet does the filtering instead. Algorithms recommend what listeners might enjoy based on what they've already listened to.
But this scale creates a problem. With 75,000 new tracks arriving daily, it's impossible for humans to listen to everything and decide what's good. Platforms have to rely entirely on automated systems to manage the flood.
What Comes Next
Deezer's decision to publicly report on AI music puts pressure on other platforms to do the same. As AI music generation tools keep improving, expect more transparency demands from regulators and from listeners themselves.
The big question: Will AI-generated music get good enough that people actually want to listen to it at scale? If the technology improves significantly, it could reshape how artists earn money from streaming.
Analysis: Right now, the industry seems to be settling on a middle path: don't ban AI music, but label it clearly and monitor it for fraud. This is similar to how platforms already handle remixes, cover songs, and other derivative content. It preserves legitimate uses while keeping an eye out for abuse.
As AI music improves and becomes harder to distinguish from human-created songs by ear alone, musicians and industry groups will likely push for clear labeling rules. The days of AI music being invisible to listeners are probably numbered.
Key Takeaways
- Deezer receives 75,000 AI-generated songs daily, but people listen to them at extremely low rates
- Streaming platforms are starting to build technology to detect and flag AI music automatically
- There's a financial incentive to flood platforms with cheap AI music and artificially boost its plays
- Current AI music quality isn't compelling enough for most listeners yet
- Unsolved copyright questions remain about how AI training data is used
- The industry is moving toward transparency and monitoring rather than outright bans

