This paper introduces AudioKV, a KV cache eviction framework tailored to Large Audio-Language Models (LALMs) that prioritizes audio-critical attention heads via semantic-acoustic alignment. The method identifies these specialized heads by analyzing attention scores on ASR tasks and dynamically allocates KV cache budgets to them. It also introduces Spectral Score Smoothing (SSS), an FFT-based filtering strategy that improves token selection. AudioKV significantly outperforms existing KV cache compression techniques, maintaining near-full accuracy at a 40% compression ratio on models such as Qwen3-Omni-30B.
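The head-prioritization step described above can be sketched as follows. The paper's exact scoring and allocation rules are not reproduced here; this sketch assumes each head's "audio-criticality" is the attention mass it places on audio-token positions during an ASR pass, and that the total KV cache budget is split proportionally to those scores with a small per-head floor. Function and parameter names are illustrative, not the authors' API.

```python
import numpy as np

def allocate_head_budgets(attn, audio_mask, total_budget, floor=4):
    """Split a total KV cache budget across attention heads.

    attn: (num_heads, q_len, kv_len) attention weights from an ASR pass.
    audio_mask: (kv_len,) bool, True at audio-token positions.
    Returns an int array of per-head token budgets summing to total_budget.
    """
    # Fraction of each head's attention mass that lands on audio tokens:
    # heads with a high fraction are treated as audio-critical.
    audio_mass = attn[:, :, audio_mask].sum(axis=(1, 2))
    total_mass = attn.sum(axis=(1, 2))
    score = audio_mass / np.maximum(total_mass, 1e-9)

    # Proportional allocation with a per-head floor so no head is starved.
    n = len(score)
    spare = total_budget - floor * n
    budgets = floor + np.floor(spare * score / score.sum()).astype(int)

    # Hand tokens left over from flooring to the highest-scoring heads.
    leftover = total_budget - budgets.sum()
    for i in np.argsort(-score)[:leftover]:
        budgets[i] += 1
    return budgets
```

In this toy allocation, a head that attends mostly to audio positions receives a proportionally larger slice of the cache, which mirrors the paper's idea of budgeting preferentially toward modality-specialized heads.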
Audio-specific KV cache eviction lets you compress LALMs by 40% with almost no accuracy loss, while generic methods fall apart.
Large Audio-Language Models (LALMs) have set new benchmarks in speech processing, yet their deployment is hindered by the memory footprint of the Key-Value (KV) cache during long-context inference. While general KV cache compression techniques perform well on LLMs, they often fail in the audio domain because they overlook the intrinsic temporal continuity of acoustic signals. To bridge this gap, we propose AudioKV, a novel framework that robustly prioritizes audio-critical attention heads through a hardware-friendly semantic-acoustic alignment mechanism. Specifically, we identify these modality-specialized heads by analyzing attention scores on ASR tasks and preferentially allocate KV cache budgets to them. Furthermore, we introduce Spectral Score Smoothing (SSS), an FFT-based global filtering strategy that suppresses high-frequency noise and recovers smooth global trends from importance scores, yielding more balanced token selection. Extensive evaluations across multiple LALMs, including the Qwen and Gemma series, demonstrate that AudioKV significantly outperforms baselines while improving computational efficiency. Notably, at a 40% compression ratio, AudioKV maintains near-full accuracy on Qwen3-Omni-30B with only a 0.45% drop, whereas traditional methods suffer catastrophic performance degradation and repetition. Our code will be released after acceptance.
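The Spectral Score Smoothing idea can be illustrated with a minimal sketch: low-pass filter a 1-D sequence of per-token importance scores in the frequency domain, then select tokens from the smoothed scores. The cutoff choice, filter shape, and function names below are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def spectral_smooth(scores, keep_ratio=0.1):
    """FFT low-pass filter over a 1-D importance-score sequence.

    Zeroes out all but the lowest `keep_ratio` fraction of real-FFT
    frequency bins, so spiky high-frequency noise is suppressed while
    the smooth global trend of the scores survives.
    """
    spec = np.fft.rfft(np.asarray(scores, dtype=float))
    cutoff = max(1, int(len(spec) * keep_ratio))  # always keep the DC bin
    spec[cutoff:] = 0.0
    return np.fft.irfft(spec, n=len(scores))

def select_tokens(scores, budget, keep_ratio=0.1):
    """Keep the `budget` token positions with the highest smoothed scores."""
    smoothed = spectral_smooth(scores, keep_ratio)
    return np.sort(np.argsort(-smoothed)[:budget])
```

Because contiguous acoustic regions tend to share importance, selecting from the smoothed curve favors coherent spans of audio tokens rather than isolated noisy spikes, which is the balance the abstract attributes to SSS.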