Evolutionary search beats hand-tuned heuristics at finding stage-wise pruning schedules for diffusion models, achieving better speed/quality tradeoffs.
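The search itself can be pictured as a minimal mutate-and-select loop over per-stage pruning ratios. This sketch is not the paper's implementation: the stage count, the synthetic fitness function, and all hyperparameters below are placeholder assumptions.

```python
import random

NUM_STAGES = 4          # assumption: sampler split into 4 pruning stages
POP_SIZE = 16
GENERATIONS = 30
MUTATION_SCALE = 0.05

def evaluate(schedule):
    """Placeholder fitness standing in for a real speed/quality
    measurement of a model pruned with this stage-wise schedule.
    Synthetic so the sketch runs end to end."""
    speedup = sum(schedule) / NUM_STAGES            # more pruning -> faster
    quality_loss = sum(r ** 2 for r in schedule)    # aggressive pruning hurts
    return speedup - 0.8 * quality_loss

def mutate(schedule):
    """Perturb one randomly chosen stage's pruning ratio, clipped to [0, 0.9]."""
    child = list(schedule)
    i = random.randrange(NUM_STAGES)
    child[i] = min(0.9, max(0.0, child[i] + random.gauss(0, MUTATION_SCALE)))
    return child

# Initialize a population of random stage-wise pruning schedules.
population = [[random.uniform(0.0, 0.5) for _ in range(NUM_STAGES)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Keep the fitter half, refill with mutated copies of survivors.
    population.sort(key=evaluate, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=evaluate)
print("best stage-wise pruning ratios:", [round(r, 3) for r in best])
```

In a real setup the `evaluate` call would prune and benchmark the actual diffusion model, which is where essentially all of the compute goes.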
Language agents can achieve more diverse and effective self-reflection by encoding cross-sample reflection patterns directly into model parameters, leading to significant performance gains in reasoning tasks.
LVLMs aren't as robust as you think: a simple tweak to existing black-box attacks can triple the success rate against Claude-4.0 and achieve near-perfect scores against Gemini 2.5 Pro and GPT-5.
Attention sinks, considered essential in autoregressive language models, turn out to be surprisingly prunable in diffusion language models, leading to better efficiency.
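A minimal sketch of what pruning attention sinks could look like, assuming sinks are key positions that absorb an outsized share of attention mass across queries and heads; the threshold, shapes, and detection rule here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def find_attention_sinks(attn, threshold=0.3):
    """Flag key positions that soak up a disproportionate share of
    attention mass across all queries and heads ("attention sinks").
    attn: array of shape (heads, seq_len, seq_len), rows sum to 1."""
    incoming = attn.mean(axis=(0, 1))        # avg attention each key receives
    return np.where(incoming > threshold)[0]

def prune_sinks(attn, sink_idx):
    """Zero out attention to sink positions and renormalize each row,
    mimicking pruning the sinks at inference time."""
    pruned = attn.copy()
    pruned[:, :, sink_idx] = 0.0
    pruned /= pruned.sum(axis=-1, keepdims=True)
    return pruned

# Toy demo: one head where position 0 behaves like a sink.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 6, 6))
logits[:, :, 0] += 3.0                        # inflate attention to token 0
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

sinks = find_attention_sinks(attn)
print("detected sinks:", sinks)               # expect: [0]
print(prune_sinks(attn, sinks).sum(axis=-1))  # rows still sum to 1
```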
Analytical diffusion models can now scale to ImageNet-1K without training, thanks to a clever "Golden Subset" selection strategy that avoids full-dataset scans.
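For intuition: an analytical (training-free) denoiser is a closed-form weighted average over data points, so restricting it to a small subset cuts the per-step cost from the full dataset size to the subset size. The selector below is a generic stand-in (farthest-point sampling), not the paper's "Golden Subset" criterion, and all sizes and noise parameters are placeholder assumptions.

```python
import numpy as np

def farthest_point_subset(data, k, seed=0):
    """Stand-in subset selection via greedy farthest-point sampling;
    shows where a subset plugs in, not the paper's selection rule.
    data: (N, D)."""
    rng = np.random.default_rng(seed)
    idx = [rng.integers(len(data))]
    dists = np.linalg.norm(data - data[idx[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dists.argmax())
        idx.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(data - data[nxt], axis=1))
    return data[idx]

def analytical_denoiser(x_t, subset, alpha_t, sigma_t):
    """Closed-form optimal denoiser under the empirical distribution of
    `subset`: a softmax-weighted average of the subset points."""
    # log N(x_t; alpha_t * x_i, sigma_t^2 I), up to a shared constant
    log_w = -np.sum((x_t - alpha_t * subset) ** 2, axis=1) / (2 * sigma_t ** 2)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return w @ subset

# Toy demo: denoise one noisy point against a 64-point subset of 10k samples.
rng = np.random.default_rng(1)
data = rng.normal(size=(10_000, 8))
subset = farthest_point_subset(data, k=64)
x0 = data[0]
alpha_t, sigma_t = 0.7, 0.5
x_t = alpha_t * x0 + sigma_t * rng.normal(size=8)
print("denoised estimate:", analytical_denoiser(x_t, subset, alpha_t, sigma_t))
```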