The paper introduces DiscoPhon, a multilingual benchmark for unsupervised phoneme discovery using discrete speech units derived from self-supervised models. It evaluates how well these units can be mapped to phoneme inventories across diverse languages using metrics for unit quality, recognition, and segmentation. Experiments with HuBERT and SpidR baselines reveal that while phonemic information is present in the learned units, performance varies significantly across languages.
Unsupervised phoneme discovery from self-supervised speech models is surprisingly viable, but language-specific challenges remain a significant hurdle.
We introduce DiscoPhon, a multilingual benchmark for evaluating unsupervised phoneme discovery from discrete speech units. DiscoPhon covers 6 dev and 6 test languages, chosen to span a wide range of phonemic contrasts. Given only 10 hours of speech in a previously unseen language, systems must produce discrete units that are mapped to a predefined phoneme inventory, through either a many-to-one or a one-to-one assignment. The resulting sequences are evaluated for unit quality, recognition, and segmentation. We provide four pretrained multilingual HuBERT and SpidR baselines, and show that phonemic information is sufficiently accessible in current models for the derived units to correlate well with phonemes, though with variations across languages.
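The many-to-one assignment described above can be sketched with a toy example: each discrete unit is mapped to the phoneme it co-occurs with most often in frame-aligned data. This is a minimal illustration of the general idea, not the benchmark's actual evaluation code; the sequences and the majority-vote rule are assumptions for the example (a one-to-one assignment would instead require a bipartite matching, e.g. the Hungarian algorithm).

```python
from collections import Counter, defaultdict

def many_to_one_mapping(unit_seq, phoneme_seq):
    """Assign each discrete unit to its most frequently
    co-occurring phoneme (many-to-one majority vote)."""
    counts = defaultdict(Counter)
    for u, p in zip(unit_seq, phoneme_seq):
        counts[u][p] += 1
    return {u: c.most_common(1)[0][0] for u, c in counts.items()}

# Hypothetical frame-aligned unit and phoneme sequences
units    = [0, 0, 1, 1, 2, 2, 2, 3, 0]
phonemes = ["a", "a", "t", "t", "s", "s", "t", "i", "a"]

mapping = many_to_one_mapping(units, phonemes)
decoded = [mapping[u] for u in units]
# mapping: {0: "a", 1: "t", 2: "s", 3: "i"}
# decoded: ["a", "a", "t", "t", "s", "s", "s", "i", "a"]
```

Under this mapping, several units may share a phoneme label, which is why the many-to-one setting is more permissive than the one-to-one setting: it rewards units that are phonemically pure even when the model uses more units than the language has phonemes.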