This paper introduces Trajectory-Ranked Instruction Masked Supervision (TRIMS), a novel supervised fine-tuning framework for Masked Diffusion Language Models (MDLMs) that uses an autoregressive teacher model to guide a trajectory-aware masking strategy. TRIMS addresses the train-inference mismatch in DLMs by providing explicit supervision over token reveal order, leading to more effective decoding trajectories. Experiments on math and coding benchmarks demonstrate that TRIMS significantly improves the accuracy-parallelism trade-off compared to standard MDLM training and achieves competitive performance with distillation-based approaches at lower training cost.
Diffusion language models can achieve faster decoding and better accuracy by learning directly from the token reveal order suggested by a lightweight autoregressive teacher, without expensive distillation.
Diffusion language models (DLMs) offer a promising path toward low-latency generation through parallel decoding, but their practical efficiency depends heavily on the decoding trajectory. In practice, this advantage often fails to fully materialize because standard training does not provide explicit supervision over token reveal order, creating a train-inference mismatch that leads to suboptimal decoding behavior. We propose Trajectory-Ranked Instruction Masked Supervision (TRIMS), a simple trajectory-guided supervised fine-tuning framework that injects trajectory supervision into standard Masked Diffusion Language Model (MDLM) training with minimal overhead. Instead of relying on costly DLM-based distillation, TRIMS uses lightweight signals from an autoregressive teacher to guide a trajectory-aware masking strategy, encouraging the model to learn more effective decoding orders. Experiments on LLaDA and Dream across math and coding benchmarks show that TRIMS significantly improves the accuracy-parallelism trade-off over both standard MDLM training and train-free acceleration baselines, while achieving competitive performance with prior distillation-based approaches at substantially lower training cost. Further analysis shows that TRIMS leads to better decoding trajectories, validating the effectiveness of trajectory-guided supervision for DLMs.
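The core mechanism described above — using a lightweight autoregressive teacher to define a token reveal order, then masking accordingly during supervised fine-tuning — can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's implementation: the function names, the use of per-token teacher confidence as the ranking signal, and the "mask the latest-revealed tokens" rule are all assumptions made for clarity.

```python
MASK = "<mask>"

def teacher_rank(token_scores):
    """Return token positions sorted from highest to lowest teacher score,
    i.e. a suggested reveal order (most confident tokens revealed first).
    (Assumption: TRIMS-style ranking uses AR teacher confidence.)"""
    return sorted(range(len(token_scores)), key=lambda i: -token_scores[i])

def trajectory_masked_example(tokens, token_scores, mask_ratio):
    """Build one training state: mask the last `mask_ratio` fraction of the
    reveal order, mimicking a partially decoded trajectory in which the
    easiest tokens are already revealed. Loss would be computed on the
    masked positions only, as in standard MDLM training."""
    order = teacher_rank(token_scores)
    n_masked = max(1, round(mask_ratio * len(tokens)))
    # Tokens the teacher ranks lowest are revealed last, so mask them.
    masked_positions = set(order[len(tokens) - n_masked:])
    corrupted = [MASK if i in masked_positions else t
                 for i, t in enumerate(tokens)]
    return corrupted, sorted(masked_positions)

# Toy example with illustrative per-token teacher confidences.
tokens = ["The", "answer", "is", "42", "."]
scores = [0.9, 0.4, 0.8, 0.2, 0.95]
corrupted, targets = trajectory_masked_example(tokens, scores, mask_ratio=0.4)
# corrupted → ["The", "<mask>", "is", "<mask>", "."], targets → [1, 3]
```

Compared with the uniform random masking of standard MDLM training, this sampling scheme ties each training state to a plausible decoding trajectory, which is the intuition behind the train-inference mismatch the abstract identifies.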