This paper investigates the semantic and phonetic information captured by widely used speech tokenizers, which are crucial for connecting speech to LLMs in multimodal systems. Through word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as centered kernel alignment (CKA), the authors find that current speech tokenizers primarily encode phonetic information rather than lexical-semantic content. This mismatch between speech-derived and text-derived semantics can degrade multimodal LLM performance, highlighting the need for improved speech tokenization methods.
Speech tokenizers, despite being crucial for multimodal LLMs, primarily capture phonetic information, creating a mismatch with text-derived semantics that hinders performance.
Speech tokenizers are essential for connecting speech to large language models (LLMs) in multimodal systems. These tokenizers are expected to preserve both semantic and acoustic information for downstream understanding and generation. However, emerging evidence suggests that what is termed "semantic" in speech representations does not align with text-derived semantics, a mismatch that can degrade multimodal LLM performance. In this paper, we systematically analyze the information encoded by several widely used speech tokenizers, disentangling their semantic and phonetic content through word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as CKA. Our results show that current tokenizers primarily capture phonetic rather than lexical-semantic structure, and we derive practical implications for the design of next-generation speech tokenization methods.
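For concreteness, the CKA metric named in the abstract can be computed in a few lines. The sketch below is a minimal implementation of linear CKA (following the standard Kornblith et al. formulation), not the authors' code: the feature matrices, their dimensions, and the idea of pairing pooled speech-token embeddings with text embeddings for the same words are illustrative assumptions.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between representation matrices X (n x d1) and
    Y (n x d2), where rows are paired by example (here, by word)."""
    # Center each feature dimension before comparing.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(num / den)

# Hypothetical usage: placeholder arrays standing in for pooled
# speech-tokenizer embeddings and text embeddings of the same 500 words.
speech_feats = np.random.randn(500, 768)
text_feats = np.random.randn(500, 1024)
print(linear_cka(speech_feats, text_feats))  # near 0: weak alignment
```

A score near 1 would indicate that the two representation spaces share linear structure; under the paper's findings, speech-tokenizer features would be expected to align more strongly with phonetic features than with text-derived semantic embeddings.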