OraCore Editors · 7 min read

AI music is splitting streaming in real time

Apple, Deezer, Bandcamp, and Suno are drawing hard lines around AI music as labels, lawsuits, and fraud cases pile up.


AI music is no longer a weird side project. In March 2026, Apple Music began asking labels and artists to tag AI-made tracks, while Deezer said it had already detected more than 13.4 million AI songs in 2025. That is the number to keep in mind: this is not a niche flood, it is a catalog problem.

The fight is now happening in public. Some services want labels, some want bans, some want detection, and some want to sell the tools that create the music in the first place. If you want to understand where AI music is heading, you have to follow the platforms, the lawsuits, and the fraud cases all at once.

Apple wants AI labels, not guesses


Apple’s new “Transparency Tags” system is a very Apple move: it does not ban AI music, but it does ask for disclosure. According to the company’s industry newsletter, the tags cover four buckets: track, composition, artwork, and music video. That matters because AI can touch all of them at once.


The track tag is for a sound recording where a material portion was generated by AI tools. The composition tag covers AI-generated lyrics or other compositional elements. Artwork tags apply at the album level, while music video tags cover other AI-generated visual content. Multiple tags can be attached to the same release, which is a practical answer to a messy problem.

Apple is also taking a softer route than a ban, and that choice is telling. It is trying to preserve the streaming catalog while making AI content easier to spot for listeners, rights holders, and distributors. If enough major services copy this model, disclosure could become the default expectation instead of a nice-to-have.

  • Apple Music is asking for voluntary AI disclosures.
  • The system covers track, composition, artwork, and music video metadata.
  • Tags can be combined when one release includes multiple AI-generated elements.
  • No AI use is assumed if a provider does not tag a work.
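
To make the four-bucket system concrete, here is a minimal sketch of what a multi-tag disclosure record could look like. The field names and structure are assumptions for illustration, not Apple's actual metadata format:

```python
# Hypothetical disclosure record for Apple's four tag categories.
# Names and shape are illustrative assumptions, not Apple's real schema.
AI_TAG_TYPES = {"track", "composition", "artwork", "music_video"}

def make_disclosure(release_id: str, tags: set[str]) -> dict:
    """Attach zero or more AI tags to a release.

    An empty tag set means no AI use is claimed either way,
    mirroring the policy that untagged works carry no assumption.
    """
    unknown = tags - AI_TAG_TYPES
    if unknown:
        raise ValueError(f"unknown tag types: {unknown}")
    return {"release_id": release_id, "ai_tags": sorted(tags)}

# A release where both the recording and the cover art used AI tools:
record = make_disclosure("album-123", {"track", "artwork"})
print(record)  # {'release_id': 'album-123', 'ai_tags': ['artwork', 'track']}
```

The point of allowing multiple tags on one record is exactly the messy case described above: a single release where the audio, the lyrics, and the artwork each involved AI to different degrees.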

Detection is becoming a business

Deezer took a different route. Instead of waiting for labels to self-report, it built detection into its own service and later opened that tool to other companies. Deezer says its system can identify AI songs with 99.8 percent accuracy, and it used the tool to detect and tag more than 13.4 million AI songs in 2025.

That number sounds huge because it is huge. It also explains why streaming platforms are getting nervous about recommendation quality, royalty fraud, and catalog spam. If millions of synthetic tracks can be uploaded cheaply, then even a small percentage slipping through can distort playlists and payment systems.

Bandcamp took the hardest line so far. The company banned music and audio generated wholly or in substantial part by AI, and it also blocked AI impersonation of other artists or styles. That puts it on a very different path from Apple and Deezer, which are trying to classify AI music rather than remove it.

  • Deezer says its detector reached 99.8% accuracy.
  • Deezer identified and tagged more than 13.4 million AI songs in 2025.
  • Bandcamp bans music generated wholly or in substantial part by AI.
  • Bandcamp also blocks AI impersonation of artists and styles.
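
A quick back-of-envelope calculation shows why even a 99.8 percent figure leaves platforms nervous at catalog scale. The upload volume below is an assumed illustrative number, not a figure Deezer has published:

```python
# Even a 99.8%-accurate detector makes many wrong calls at scale.
# daily_uploads is an illustrative assumption, not Deezer's real volume.
daily_uploads = 100_000
error_rate = 1 - 0.998        # 0.2% of decisions are wrong
wrong_per_day = daily_uploads * error_rate
print(round(wrong_per_day))   # 200 mislabeled tracks per day at this volume
```

Two hundred mislabeled tracks a day is survivable for a recommendation system, but it explains why no major service treats detection alone as a complete answer.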

The money problem is already here

The biggest reason this debate matters is not taste. It is money. In November 2025, North Carolina man Michael Smith pleaded guilty to creating hundreds of thousands of AI-generated songs and using bots to stream them billions of times. The Department of Justice said the scheme brought in more than $8 million in royalties.
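
The DOJ figures also sketch the economics of the scheme. The stream count below is an assumption taken from the phrase "billions of times," not a number from the case:

```python
# Rough per-stream payout implied by the DOJ figures.
# streams is an illustrative assumption ("billions" taken as 4 billion).
royalties_usd = 8_000_000
streams = 4_000_000_000
per_stream = royalties_usd / streams
print(f"${per_stream:.4f} per stream")  # $0.0020 per stream
```

At fractions of a cent per stream, the scheme only works at industrial volume, which is exactly why bot traffic and AI-generated catalogs pair so naturally in fraud cases.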


That case changed the conversation because it showed how AI music can be used as a fraud machine, not just a creative tool. A service can tolerate a few novelty tracks. It cannot tolerate an industrial-scale bot farm pumping fake streams into royalty pools.

This is also why labels are getting more aggressive about metadata. If a platform knows what is synthetic, it can decide whether to recommend it, monetize it, or remove it. Without labels, the system becomes a guessing game that rewards whoever uploads the most content fastest.

Smaller services are staking out positions of their own. “The heart of Qobuz is and will remain human,” the French hi-res streaming service said in its AI charter.

Creators want control, and the tools are getting better

While streaming services argue about labels and bans, the generation tools keep getting more capable. Suno released v5.5 with three notable features: Voices, My Taste, and Custom Models. Voices lets users train the model on their own voice using clean a cappellas, finished tracks, or live recordings from a phone or laptop.

That is a meaningful shift. Earlier AI music tools mostly focused on making a song sound more polished. Suno is now moving toward personal style capture, which is where the ethical questions get sharper. If a model can learn your voice, what stops someone from training it on a singer who never agreed to be part of the dataset?

Google is pushing in a similar direction through Lyria and the Gemini app, where users can generate 30-second tracks from text, images, or video. ElevenLabs has also leaned into commercial music generation with its Eleven Music product and an AI album made to show how artists might use the tool while keeping ownership.

What this means for the next wave of music apps

The industry is splitting into three camps. One camp wants disclosure, like Apple. One wants detection and filtering, like Deezer. One wants a hard ban, like Bandcamp. That split will shape what kinds of music apps can survive, because the same AI song can now be treated as metadata, content, or fraud depending on where it lands.

My bet is that disclosure becomes the baseline for major streaming platforms, while smaller artist-first services keep tightening their rules. The wild card is fraud: every new case like the Michael Smith plea gives platforms more reason to build detection into upload pipelines before the next wave of synthetic tracks gets indexed and paid.

So the question for 2026 is simple: will AI music be treated like a creative format with labels, or like spam that needs to be filtered before it reaches listeners? The answer will decide who gets paid, who gets heard, and how much of the catalog a streaming service is willing to trust.

For more on the business side of AI-generated media, see our coverage of AI content labels on streaming platforms and the copyright fights shaping generative audio.