By stitching together the best reasoning steps from multiple diffusion-generated trajectories, this method achieves significant accuracy gains and latency reductions on math and coding tasks, outperforming both traditional diffusion models and unified architectures.
Current LLM agents are surprisingly poor at synthesizing information from multiple sources to solve realistic problems, achieving dismal scores on the new DEEPSYNTH benchmark.
Diffusion Language Models are being held back by auto-regressive thinking, and unlocking their true potential requires a complete paradigm shift.