The paper introduces MusicGen-Stem, a multi-stem autoregressive music generation model capable of generating and editing individual stems (bass, drums, other) and their mixtures. The authors train one specialized compression algorithm per stem to create parallel token streams, then leverage music source separation to obtain stems from a large dataset on which they train a multi-stream text-to-music language model. The model's conditioning method enables editing of individual stems in existing or generated songs, facilitating iterative composition.
Edit the bassline, drums, or other instruments of any song with this new open-source multi-stem music generation model.
While most music generation models generate a single mixture of stems (in mono or stereo), we propose to train a multi-stem generative model with 3 stems (bass, drums, and other) that learns the musical dependencies between them. To do so, we train one specialized compression algorithm per stem to tokenize the music into parallel streams of tokens. Then, we leverage recent improvements in music source separation to train a multi-stream text-to-music language model on a large dataset. Finally, thanks to a particular conditioning method, our model is able to edit the bass, drums, or other stems of existing or generated songs, as well as perform iterative composition (e.g., generating a bassline on top of existing drums). This brings more flexibility to music generation, and it is, to the best of our knowledge, the first open-source multi-stem autoregressive music generation model that achieves good-quality generation together with coherent source editing. Code and model weights will be released, and samples are available at simonrouard.github.io/musicgenstem.
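To make the tokenization step concrete, here is a minimal sketch of how three per-stem codecs could produce aligned parallel token streams. Everything here is an illustrative assumption rather than the released API: the `StemTokenizer` and `tokenize_song` names, the codebook size, the frame rate, and the sample rate are placeholders, and the abstract only states that one compression algorithm is trained per stem.

```python
# Hypothetical sketch of per-stem tokenization into parallel token streams.
# A real codec would run a learned encoder + quantizer; here the encode()
# body is a stand-in that only gets the shapes right.
import torch
import torch.nn as nn


class StemTokenizer(nn.Module):
    """Stand-in for a per-stem neural audio codec: maps waveforms to
    discrete token ids at a fixed frame rate."""

    def __init__(self, codebook_size: int = 2048,
                 frame_rate: int = 50, sample_rate: int = 32000):
        super().__init__()
        self.codebook_size = codebook_size
        self.frame_rate = frame_rate
        self.sample_rate = sample_rate

    def encode(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, channels, samples) -> tokens: (batch, frames).
        # Placeholder logic: random ids with the expected frame count.
        frames = wav.shape[-1] * self.frame_rate // self.sample_rate
        return torch.randint(self.codebook_size, (wav.shape[0], frames))


# One specialized tokenizer per stem, as described in the abstract.
tokenizers = {stem: StemTokenizer() for stem in ("bass", "drums", "other")}


def tokenize_song(stems: dict[str, torch.Tensor]) -> torch.Tensor:
    """Tokenize separated stems into parallel streams, stacked so the
    language model sees aligned tokens for all three stems per frame."""
    streams = [tokenizers[name].encode(wav) for name, wav in stems.items()]
    return torch.stack(streams, dim=1)  # (batch, n_stems, frames)


# Usage: ten seconds of (fake) separated audio per stem.
stems = {name: torch.randn(1, 1, 32000 * 10)
         for name in ("bass", "drums", "other")}
tokens = tokenize_song(stems)  # shape (1, 3, 500)
```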
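Likewise, a hedged sketch of how conditioning-based stem editing could work at inference time: one stream is resampled autoregressively while the other streams stay fixed in the context. The `lm` callable, the `edit_stem` signature, and the frame-by-frame sampling loop are assumptions for illustration, not the model's actual interface.

```python
# Hypothetical sketch of editing one stem while the others condition it.
import torch


@torch.no_grad()
def edit_stem(lm, tokens: torch.Tensor, stem_idx: int,
              text: str, temperature: float = 1.0) -> torch.Tensor:
    """Resample one token stream while the others act as fixed conditioning.

    lm:     hypothetical callable mapping (context tokens, text prompt) to
            next-frame logits of shape (batch, n_stems, vocab).
    tokens: (batch, n_stems, frames) parallel streams from the tokenizers.
    """
    edited = tokens.clone()
    for t in range(tokens.shape[-1]):
        # Predict frame t from the preceding frames; the unedited streams
        # condition the prediction because their tokens never change.
        logits = lm(edited[..., :t], text)
        probs = torch.softmax(logits[:, stem_idx] / temperature, dim=-1)
        edited[:, stem_idx, t] = torch.multinomial(probs, 1).squeeze(-1)
    return edited


# Example: regenerate the bass (stream 0) on top of existing drums/other,
# matching the iterative-composition use case from the abstract.
# new_tokens = edit_stem(lm, tokens, stem_idx=0, text="funky slap bass")
```

Under this reading, iterative composition falls out of the same mechanism: generating a bassline on top of existing drums is just editing the bass stream while the drum stream is held fixed.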