This paper introduces AILive Mixer, a deep learning system for automatic multitrack music mixing tailored to live performances, addressing the challenges of acoustic bleed and strict latency requirements. The system predicts a mono gain for each track in real time, using a model trained to handle the bleed inherent in live recordings. The key result is a zero-latency mixing system capable of producing coherent mixes from live multitrack audio, a novel application of deep learning in music production.
For live music performances, this work achieves zero-latency automatic music mixing with deep learning, which prior approaches did not attempt because of acoustic bleed and synchronization constraints.
In this work, we present a deep learning-based automatic multitrack music mixing system tailored to live performances. In a live performance, channels are often corrupted by acoustic bleed from co-located instruments. Moreover, audio-visual synchronization is of critical importance, placing a tight constraint on audio latency. We primarily tackle these two challenges: handling bleed in the input channels and producing the music mix with zero latency. Although there have been several recent developments in automatic music mixing, most if not all previous work focuses on offline production from isolated instrument signals, and to the best of our knowledge, this is the first end-to-end deep learning system developed for live music performances. Our proposed system currently predicts mono gains for a multitrack input, but its design, along with the precedent set by past work, allows easy extension to predicting other relevant mixing parameters in future work.
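To make the gain-prediction output concrete, here is a minimal sketch of how per-track mono gains could be applied to a multitrack frame and summed into a mix. The function name, array shapes, and gain values are illustrative assumptions, not the paper's implementation; in the described system the gains would come from the deep learning model operating frame-by-frame with no lookahead.

```python
import numpy as np

def apply_gains(tracks: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Apply one mono gain per track and sum to a mono mix.

    tracks: (n_tracks, n_samples) frame of multitrack audio
    gains:  (n_tracks,) per-track gains (assumed model outputs)
    returns: (n_samples,) mixed audio
    """
    # Broadcast each gain across its track's samples, then sum over tracks.
    return (gains[:, None] * tracks).sum(axis=0)

# Example: 4 channels, one 10 ms frame at 48 kHz, placeholder gains.
rng = np.random.default_rng(0)
tracks = rng.standard_normal((4, 480))
gains = np.array([0.8, 0.5, 1.0, 0.3])  # hypothetical model predictions
mix = apply_gains(tracks, gains)
print(mix.shape)  # (480,)
```

Because the operation is a simple weighted sum per sample, it adds no algorithmic latency; the latency budget is consumed only by the model that predicts the gains.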