The paper introduces G-STAR, an end-to-end system for timestamped speaker-attributed ASR in long-form, multi-party speech, addressing the limitations of previous Speech-LLM systems in capturing fine-grained temporal boundaries and linking speaker identities across chunks. G-STAR couples a time-aware speaker-tracking module with a Speech-LLM transcription backbone, providing the LLM with structured, temporally grounded speaker cues. Experiments analyze cue fusion, local versus long-context trade-offs, and hierarchical training objectives.
G-STAR tackles long-form, multi-speaker ASR by giving Speech-LLMs time-aware speaker tracking, enabling robust identity linking across chunks.
We study timestamped speaker-attributed ASR for long-form, multi-party speech with overlap, where chunk-wise inference must preserve meeting-level speaker-identity consistency while producing timestamped, speaker-labeled transcripts. Previous Speech-LLM systems tend to prioritize either local diarization or global labeling, and often fail to capture fine-grained temporal boundaries or to link identities robustly across chunks. We propose G-STAR, an end-to-end system that couples a time-aware speaker-tracking module with a Speech-LLM transcription backbone. The tracker provides structured speaker cues with temporal grounding, and the LLM generates attributed text conditioned on these cues. G-STAR supports both component-wise optimization and joint end-to-end training, enabling flexible learning under heterogeneous supervision and domain shift. Experiments analyze cue fusion, local versus long-context trade-offs, and hierarchical objectives.
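To make the coupling concrete, here is a minimal sketch of how a tracker's output might be serialized into structured, temporally grounded cues that condition the LLM. The abstract does not specify the cue format; the `SpeakerCue` fields, the `<cue ...>` tag syntax, and the chunk-relative timing convention below are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of tracker-to-LLM cue serialization; the cue
# schema and tag format are assumptions, not G-STAR's real interface.
from dataclasses import dataclass

@dataclass
class SpeakerCue:
    speaker_id: str   # meeting-level identity, kept stable across chunks
    start: float      # segment start time in seconds (chunk-relative)
    end: float        # segment end time in seconds

def cues_to_prompt(cues: list[SpeakerCue]) -> str:
    """Render tracker cues as a text prefix the LLM is conditioned on."""
    lines = [
        f"<cue speaker={c.speaker_id} start={c.start:.2f} end={c.end:.2f}>"
        for c in sorted(cues, key=lambda c: c.start)
    ]
    return "\n".join(lines)

# Example: two overlapping segments within one chunk. The tracker has
# already linked local clusters to global identities (spk1, spk2), so
# the LLM only needs to attribute text within the given boundaries.
chunk_cues = [
    SpeakerCue("spk1", 0.00, 4.10),
    SpeakerCue("spk2", 3.40, 7.85),  # overlaps spk1 from 3.40 to 4.10
]
print(cues_to_prompt(chunk_cues))
```

Under this reading, resolving chunk-local clusters to global identities happens in the tracker, so the LLM's job reduces to generating text attributed within the given time boundaries, which is what allows chunk-wise inference to stay consistent at the meeting level.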