This paper investigates the robustness of speech foundation models for speaker diarization across age groups (children, adults, older adults) within an end-to-end neural diarization framework (EEND-VC). The authors find significant performance drops when models trained on adult speech are applied zero-shot to child and older-adult conversations. Joint multi-age training improves robustness without sacrificing adult performance, and domain-specific adaptation yields further gains, particularly when using the Whisper encoder.
Training speaker diarization models solely on adult speech leads to surprisingly poor performance on children and older adults, but a simple joint multi-age training strategy largely closes the gap.
Speech foundation models have shown strong transferability across a wide range of speech applications. However, their robustness to age-related domain shift in speaker diarization remains underexplored. In this work, we present a cross-lifespan evaluation within a unified end-to-end neural diarization framework (EEND-VC), covering conversational speech from children, adults, and older adults. We compare models under zero-shot cross-age inference, joint multi-age training, and domain-specific adaptation. Results show substantial performance degradation when models trained on adult-only speech are applied to child and older-adult conversational data. Joint multi-age training improves robustness without reducing diarization performance on canonical adult conversations, while targeted age-group adaptation yields further gains, particularly when using the Whisper encoder.
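The performance degradation discussed above is conventionally measured with the diarization error rate (DER), which combines missed speech, false alarms, and speaker confusion; the abstract does not state the metric, so this is an assumption. As a minimal, self-contained illustration (not the paper's evaluation code), a frame-level DER with an optimal one-to-one speaker mapping can be sketched as:

```python
from itertools import permutations

def frame_der(ref, hyp):
    """Toy frame-level diarization error rate.

    ref, hyp: per-frame speaker labels; None marks non-speech.
    DER = (missed speech + false alarms + speaker confusion)
          / number of reference speech frames,
    scored under the best one-to-one mapping between hypothesis
    and reference speakers (brute-forced here; fine for toy inputs).
    Assumes at most one speaker per frame and at least one speech frame.
    """
    ref_spk = sorted({r for r in ref if r is not None})
    hyp_spk = sorted({h for h in hyp if h is not None})
    total = sum(r is not None for r in ref)

    best = None
    for perm in permutations(hyp_spk):
        mapping = dict(zip(perm, ref_spk))  # hyp label -> ref label
        err = 0
        for r, h in zip(ref, hyp):
            if r is None and h is None:
                continue                      # correct silence
            if r is None:
                err += 1                      # false alarm
            elif h is None:
                err += 1                      # missed speech
            elif mapping.get(h) != r:
                err += 1                      # speaker confusion
        best = err if best is None else min(best, err)
    return best / total
```

For example, `frame_der(['A','A','B','B', None], ['x','x','y','y', None])` is 0.0 because the hypothesis labels map perfectly onto the reference, while confusing one of the four speech frames gives 0.25. Real evaluations score time segments with a forgiveness collar rather than discrete frames, but the error taxonomy is the same.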