Why Spotify won’t add an AI music filter
Spotify is avoiding an AI music filter while Deezer and Apple Music move toward labels, detection, and disclosure.

Spotify has no AI music filter because it is avoiding a hard call on how music was made.
Spotify’s missing filter is now a real product gap, not a hypothetical complaint. A Leipzig developer built a browser tool that blocks suspected AI tracks, Deezer says it is already tagging and excluding them, and one industry poll found 97% of listeners could not tell AI songs from human ones.
| Item | What the article says | Why it matters |
|---|---|---|
| Spotify AI Blocker | Built by Cedrik Sixtus in mid-2025 | Shows user demand for filtering |
| Suspected AI artists | More than 4,700 | Signals scale of the problem |
| Listener test result | 97% failed to spot AI tracks | Detection is hard for humans |
| Deezer poll | About 80% wanted clear labels | Labels have broad support |
| EU AI Act | Labeling required from August 2026 | Rules are coming |
Why the button is missing
Spotify is choosing restraint over certainty. The company says its priority is stopping spam, impersonation, and other harmful uses of AI, rather than trying to judge every track by how it was created.

That sounds tidy until you look at the product problem. If one song uses AI for vocals, another for a drum stem, and a third is generated entirely from a prompt, a single "AI" label will mislead listeners and artists in different ways.
Robert Prey, who studies streaming platforms at the Oxford Internet Institute, called the trade-off "borderline existential" for Spotify. That is a fair read: the company wants to avoid making moral judgments about music production while also keeping listeners from feeling tricked.
- Spotify launched a test in April that shows AI use in song credits.
- The system depends on artists, labels, or distributors disclosing the information.
- Spotify says that is not a complete solution.
- The company has not opened a user-facing AI filter like the one many listeners want.
What listeners are reacting to
The complaint from users is simple: they want choice. Cedrik Sixtus, the software developer behind Spotify AI Blocker, says he wants to decide whether he hears AI music at all. His tool filters a growing list of suspected AI artists using community tracking, odd release patterns, AI-style cover art, and external detection tools.
The list already contains more than 4,700 suspected AI artists, which tells you how quickly the problem has grown from edge case to everyday annoyance. Sixtus also warns that his browser-based tool may violate Spotify’s terms of service, which is a good reminder that user workarounds often appear before platforms act.
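The filtering logic the article describes can be sketched as a simple blocklist plus weak-signal scoring. To be clear, the signal names, weights, and threshold below are illustrative assumptions, not details of Sixtus's actual tool:

```python
# Sketch of a blocklist-style AI-music filter. All signals and weights
# here are invented for illustration; the real tool combines community
# tracking, release patterns, cover-art heuristics, and external detectors.

SUSPECTED_AI_ARTISTS = {"artist_a", "artist_b"}  # community-maintained list (hypothetical entries)

def looks_ai_generated(track: dict, threshold: float = 0.5) -> bool:
    """Combine weak signals into a single block/allow decision."""
    if track["artist"].lower() in SUSPECTED_AI_ARTISTS:
        return True
    score = 0.0
    if track.get("releases_per_month", 0) > 20:   # odd release cadence
        score += 0.4
    if track.get("ai_style_cover_art", False):    # flagged artwork
        score += 0.3
    score += track.get("external_detector_score", 0.0) * 0.5
    return score >= threshold

def filter_playlist(tracks: list[dict]) -> list[dict]:
    """Keep only tracks that do not trip the AI heuristics."""
    return [t for t in tracks if not looks_ai_generated(t)]
```

The design choice worth noting: no single signal is decisive except the blocklist itself, which is exactly why community lists like this one grow fast and why false positives are a real risk.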
“It is about choice – if you want to hear AI music or if you don’t,” says Cedrik Sixtus.
That instinct is showing up elsewhere too. In the Deezer–Ipsos poll cited in the story, around 80% of respondents said AI-generated music should be clearly labeled. People are not asking for a ban. They are asking for visibility.
Deezer and Apple Music are moving faster
Spotify is not the only company dealing with this, and its rivals are already making sharper calls. Deezer says it tags albums that include AI-generated tracks from Suno, Udio, and similar tools, then keeps those tracks out of algorithmic recommendations and human-made playlists.

Deezer also says it uses in-house detection based on statistical patterns in sound, and that it recently began selling that detection tech across the industry. That is a notable move: it turns an internal moderation problem into a product.
Apple Music took a different route in March, introducing transparency tags and saying it would eventually require labels and distributors to disclose AI use in new songs and related content. The catch is obvious. Self-disclosure is only as honest as the people filing it.
- Deezer says it is the only streaming platform with this system in place.
- Apple Music relies on transparency tags and later disclosure rules.
- Spotify has not committed to a comparable listener-facing filter.
- DDEX is working on a broader industry standard for AI disclosures.
Detection is harder than it sounds
One reason Spotify is hesitating is technical. AI music is getting better fast, and the line between fully generated songs and human-made tracks with AI help is getting blurry. Maya Ackerman, who studies AI and computational creativity at Santa Clara University and co-founded WaveAI, says the label question changes once you move from obvious prompt-to-song tools to co-creation tools.
That matters because a musician might use AI for one part of the process, then spend hours shaping the result. At that point, a blunt label can feel unfair, but no label can feel dishonest. Ackerman’s point is that the easy version of the problem is rare.
Bob Sturm, who studies AI’s disruption of music at the KTH Royal Institute of Technology, says detection systems need constant retraining as generation tools improve. He describes that as an “AI music arms race,” which is a blunt phrase, but it fits.
There is also the risk of false positives. If a human artist gets mislabeled as AI, the platform creates a trust problem of its own. Manuel Moussallam, Deezer’s head of research, says the company’s system has kept a low false positive rate so far, but hybrid cases are still under study.
The money question is hard to ignore
There is a business reason this is dragging on. Spotify benefits from keeping recommendations as open as possible, because every extra layer of detection, labeling, or filtering adds cost and friction. It also does not help that AI music can be cheap to generate and cheap to upload at scale.
That matters for royalties too. Spotify says all tracks are delivered by third-party rightsholders and paid from the same revenue pool based on listening share. If AI spam floods the system, it can still distort what gets surfaced, even if the payout math stays the same.
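The pro-rata model Spotify describes is simple arithmetic: one revenue pool, split by each party's share of total streams. A minimal sketch, with invented numbers:

```python
# Pro-rata payout model: a single revenue pool divided by stream share.
# The pool size and stream counts below are illustrative.

def pro_rata_payouts(pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split `pool` among rightsholders in proportion to their streams."""
    total = sum(streams.values())
    return {artist: pool * n / total for artist, n in streams.items()}
```

Note that the formula never changes, but its inputs do: any streams captured by AI spam enlarge the denominator and dilute every other rightsholder's share, which is why "the payout math stays the same" and "the system gets distorted" are both true at once.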
Here is the practical comparison:
- Spotify has a credit-based disclosure test, but no public AI filter.
- Deezer has detection, labeling, and recommendation filtering.
- Apple Music has tags and a disclosure plan, but listener visibility is still unclear.
- The EU AI Act will require some AI content to be labeled from August 2026.
Spotify’s earlier Verified badge move shows the company knows trust is now part of the product, not just a PR issue. The bigger question is whether it will keep treating AI music as a moderation edge case, or give listeners a real control panel before regulators and rivals force the issue. My bet: the first serious version of an AI filter will arrive because of industry standards and EU rules, not because Spotify suddenly decides the problem is simple.