This paper addresses the problem of spurious correlations in automatic Mean Opinion Score (MOS) prediction models for AI-generated audio by using domain adversarial training (DAT) to disentangle true quality perception from nuisance factors like dataset-specific acoustic signatures. The authors explore various domain definition strategies, from metadata-driven labels to data-driven clusters, finding that the optimal strategy depends on the specific MOS aspect being evaluated. Their aspect-specific domain strategy effectively mitigates acoustic biases, leading to improved correlation with human ratings and better generalization on unseen generative audio scenarios.
Forget static domain priors: the best way to rate AI-generated audio quality depends on *which* aspect of quality you're measuring.
The rapid proliferation of AI-Generated Content (AIGC) has necessitated robust metrics for perceptual quality assessment. However, automatic Mean Opinion Score (MOS) prediction models are often compromised by data scarcity, predisposing them to learn spurious correlations, such as dataset-specific acoustic signatures, rather than generalized quality features. To address this, we leverage domain adversarial training (DAT) to disentangle true quality perception from these nuisance factors. Unlike prior work that relies on static domain priors, we systematically investigate domain definition strategies ranging from explicit metadata-driven labels to implicit data-driven clusters. Our findings reveal that there is no "one-size-fits-all" domain definition; instead, the optimal strategy is highly dependent on the specific MOS aspect being evaluated. Experimental results demonstrate that our aspect-specific domain strategy effectively mitigates acoustic biases, significantly improving correlation with human ratings and achieving superior generalization on unseen generative scenarios.
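The mechanical core of domain adversarial training is a gradient reversal layer: the forward pass is the identity, but the backward pass flips (and optionally scales) the gradient from the domain classifier, so the shared feature extractor learns representations the domain classifier cannot exploit. A minimal numpy sketch of that behavior follows; the class name and the scaling factor `lam` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class GradientReversal:
    """Sketch of a gradient reversal layer (GRL) used in DAT.

    Forward: identity on the features.
    Backward: negate the incoming gradient and scale it by `lam`,
    pushing the feature extractor to *confuse* the domain classifier
    while the main (quality) head trains normally.
    """

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off between quality loss and domain confusion

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Features pass through unchanged.
        return x

    def backward(self, grad_output: np.ndarray) -> np.ndarray:
        # Reverse and scale the gradient flowing back from the domain head.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                  # identical to x
g = grl.backward(np.ones_like(x))   # reversed, scaled gradient
```

In an autograd framework the same idea is implemented as a custom backward function placed between the shared encoder and the domain classifier; everything upstream of the GRL then receives adversarial gradients from the domain loss.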