Latent-DARM is a framework that bridges discrete diffusion language models (DDLMs) and autoregressive language models (ARMs) by enabling communication in a shared latent space. This allows DDLMs to act as planners and ARMs as executors, leveraging the strengths of both architectures. Experiments on mathematical, scientific, and commonsense reasoning benchmarks show that Latent-DARM outperforms text-based interfaces and approaches state-of-the-art reasoning performance with significantly lower token usage.
By communicating in a shared latent space, Latent-DARM combines the global planning of diffusion models with the fluency of autoregressive models, boosting reasoning accuracy by up to 14 percentage points while sharply reducing token usage.
Most multi-agent systems rely exclusively on autoregressive language models (ARMs), which generate text sequentially. Although effective for producing fluent text, this sequential generation limits global reasoning and plan revision. Discrete diffusion language models (DDLMs), in contrast, enable non-sequential, globally revisable generation and have shown strong planning capabilities, but their limited text fluency hinders direct collaboration with ARMs. We introduce Latent-DARM, a latent-space communication framework that bridges DDLMs (as planners) and ARMs (as executors), combining the strengths of both architectures. Across mathematical, scientific, and commonsense reasoning benchmarks, Latent-DARM outperforms text-based interfaces on average, improving accuracy from 27.0% to 36.0% on DART-5 and from 0.0% to 14.0% on AIME2024. Latent-DARM approaches the performance of state-of-the-art reasoning models while using less than 2.2% of their token budget. This work advances multi-agent collaboration among heterogeneous model architectures.
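To make the latent-space interface concrete, the sketch below shows one plausible way a planner's hidden states could be projected into an executor's embedding space and prepended as a soft prefix, so no plan text is ever decoded. This is a hypothetical illustration, not the paper's actual architecture: the module name `LatentBridge`, all dimensions, and the fixed-prefix latent selection are assumptions.

```python
import torch
import torch.nn as nn

class LatentBridge(nn.Module):
    """Hypothetical projection from planner (DDLM) hidden states into the
    executor (ARM) embedding space. Names and shapes are illustrative."""

    def __init__(self, planner_dim: int, executor_dim: int, num_latents: int):
        super().__init__()
        # Linear map between the two models' (generally different) widths.
        self.proj = nn.Linear(planner_dim, executor_dim)
        # How many latent "plan" vectors to hand to the executor.
        self.num_latents = num_latents

    def forward(self, planner_hidden: torch.Tensor) -> torch.Tensor:
        # planner_hidden: (batch, plan_len, planner_dim) from the DDLM.
        # Take a fixed prefix of plan states as the message (an assumption).
        latents = planner_hidden[:, : self.num_latents, :]
        return self.proj(latents)  # (batch, num_latents, executor_dim)

# Toy usage with random tensors standing in for real model states:
bridge = LatentBridge(planner_dim=512, executor_dim=768, num_latents=8)
planner_hidden = torch.randn(2, 32, 512)   # toy DDLM plan states
token_embeds = torch.randn(2, 16, 768)     # toy ARM token embeddings
soft_prefix = bridge(planner_hidden)
# The executor consumes the projected latents as a soft prompt prepended
# to its ordinary token embeddings.
executor_input = torch.cat([soft_prefix, token_embeds], dim=1)
print(tuple(executor_input.shape))  # (2, 24, 768)
```

Passing a small number of latent vectors instead of decoded plan text is what would account for the large token savings the abstract reports: the executor receives the plan as a handful of continuous vectors rather than hundreds of plan tokens.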