This paper introduces an edge-cloud collaborative framework for Speech Emotion Captioning (SEC) that leverages Uncertainty-Guided Speculative Decoding (UGSD) to balance computational efficiency, accuracy, and privacy. UGSD uses a lightweight edge model to generate caption drafts and selectively offloads high-uncertainty token blocks to a more powerful cloud model for verification. Experiments on the MER2024 benchmark show that UGSD achieves BLEU score improvements of up to 62.7%, 1.4x lower latency, and 8.5x higher token throughput compared to an edge-only model.
Achieve a 62.7% BLEU score boost in speech emotion captioning by offloading only the trickiest parts of the problem to the cloud.
Speech Emotion Captioning (SEC) leverages large audio-language models to generate rich, context-aware affective descriptions from speech. However, real-world deployment remains challenging due to the substantial computational demands on resource-constrained edge devices and the privacy risks of transmitting biometric audio. While smaller audio-language models enable efficient on-device SEC, their limited capacity often weakens subtle paralinguistic modeling and fine-grained affective grounding. We propose an edge-cloud collaborative framework based on Uncertainty-Guided Speculative Decoding (UGSD). A lightweight edge model drafts captions locally, and only high-uncertainty token blocks are selectively escalated to a stronger cloud verifier for validation. Experiments on the MER2024 benchmark demonstrate substantial BLEU improvements up to 62.7%. UGSD further achieves 1.4x lower latency and 8.5x higher token throughput compared to an edge-only model. These results empirically characterize the quality-efficiency-privacy trade-off in deployable SEC systems.
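The core routing idea in the abstract, drafting locally and escalating only high-uncertainty token blocks to a cloud verifier, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `ugsd_route`, the per-token entropy criterion, and the threshold value are assumptions for clarity.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of one token's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ugsd_route(draft_blocks, threshold=0.8):
    """Hypothetical UGSD-style router (not the paper's exact algorithm).

    Each block is a list of (token, probs) pairs drafted by the edge model.
    A block is accepted locally only if every token's predictive entropy is
    below the threshold; otherwise the whole block is escalated to the
    cloud verifier. Returns (accepted_tokens, escalated_blocks).
    """
    accepted, escalated = [], []
    for block in draft_blocks:
        if all(entropy(probs) < threshold for _, probs in block):
            accepted.extend(tok for tok, _ in block)
        else:
            escalated.append(block)
    return accepted, escalated

# Toy example: one confident block stays on-device, one uncertain block
# (flatter distribution, higher entropy) is escalated for verification.
confident = [("happy", [0.9, 0.05, 0.05])]
uncertain = [("tense", [0.4, 0.35, 0.25])]
accepted, escalated = ugsd_route([confident, uncertain])
# accepted == ["happy"]; len(escalated) == 1
```

Only the escalated blocks would incur cloud round-trips, which is how such a scheme could trade a small amount of communication for the verifier's accuracy while keeping most audio-derived tokens on-device.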