The paper investigates the quality-exploration trade-off in diffusion large language models (dLLMs) under non-autoregressive decoding. It demonstrates that while low-confidence remasking improves single-sample generation quality by prioritizing confident tokens, it simultaneously reduces exploration and limits multi-sample performance. To address this, the authors propose an Independent Metropolis-Hastings sampler that better balances quality and exploration, achieving improved performance on reasoning benchmarks compared to existing remasking strategies.
Diffusion language models can achieve better reasoning performance by explicitly balancing generation quality and exploration, outperforming methods that prioritize only one.
Diffusion large language models (dLLMs) theoretically permit token decoding in arbitrary order, a flexibility that could enable richer exploration of reasoning paths than autoregressive (AR) LLMs. In practice, however, random-order decoding often hurts generation quality. To mitigate this, low-confidence remasking improves single-sample quality (e.g., Pass@$1$) by prioritizing confident tokens, but it also suppresses exploration and limits multi-sample gains (e.g., Pass@$k$), creating a fundamental quality--exploration dilemma. In this paper, we provide a unified explanation of this dilemma. We show that low-confidence remasking improves a myopic proxy for quality while provably constraining the entropy of the induced sequence distribution. To overcome this limitation, we characterize the optimal distribution that explicitly balances quality and exploration, and develop a simple Independent Metropolis--Hastings sampler that approximately targets this distribution during decoding. Experiments across a range of reasoning benchmarks, including MATH500, AIME24/25, HumanEval, and MBPP, show that our approach yields a better quality--exploration trade-off than both random and low-confidence remasking.
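To make the sampler concrete: an Independent Metropolis--Hastings (IMH) chain draws each proposal from a fixed distribution $q$ that does not depend on the current state, and accepts it with probability $\min\!\big(1, \tfrac{p(x')\,q(x)}{p(x)\,q(x')}\big)$, where $p$ is the target. The sketch below illustrates this mechanism on a toy discrete target; the target weights, uniform proposal, and function names are illustrative assumptions, not the paper's actual sequence-level target, which balances quality and exploration over generated token sequences.

```python
import random

# Toy unnormalized target over 4 "tokens" (an illustrative stand-in for
# the paper's quality/exploration-balancing sequence distribution).
TARGET_W = [1.0, 2.0, 4.0, 1.0]

def propose():
    """Fixed (state-independent) proposal: uniform over the 4 tokens."""
    return random.randrange(4)

def q(x):
    """Proposal probability mass; uniform, so constant."""
    return 0.25

def imh_chain(n_steps, seed=0):
    """Independent Metropolis-Hastings: accept a proposal x' from state x
    with probability min(1, p(x') q(x) / (p(x) q(x')))."""
    random.seed(seed)
    x = propose()
    samples = []
    for _ in range(n_steps):
        x_new = propose()
        ratio = (TARGET_W[x_new] * q(x)) / (TARGET_W[x] * q(x_new))
        if random.random() < min(1.0, ratio):
            x = x_new  # accept the proposal
        samples.append(x)
    return samples
```

Because proposals are independent of the current state, the chain can jump anywhere in one step, which is what lets an IMH-style sampler preserve exploration while the acceptance test steers samples toward the high-quality target.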