LLM defenses can achieve a 79% reduction in attack success rate against evolving multi-round attacks by using a stateful, multi-agent cooperative framework.
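A minimal sketch of what a stateful, multi-agent cooperative defense loop could look like, with hypothetical agent roles (an intent scorer and an escalation scorer) and toy heuristics standing in for LLM calls; the teaser does not specify the paper's actual architecture:

```python
# Hypothetical sketch: state persists across rounds, and cooperating agents
# each contribute a risk score per turn. None of these names come from the paper.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    turns: list = field(default_factory=list)
    risk: float = 0.0  # risk accumulated across rounds

def intent_agent(state, message):
    # Toy heuristic standing in for an LLM-based intent classifier.
    return 0.5 if "ignore previous" in message.lower() else 0.1

def escalation_agent(state, message):
    # Flags slow multi-round escalation: extra risk once probing has begun.
    return 0.2 if state.risk > 0.3 else 0.0

def defend(state, message):
    # Each agent scores the turn; carrying risk forward is what lets the
    # defense catch attacks that only look dangerous in aggregate.
    score = intent_agent(state, message) + escalation_agent(state, message)
    state.risk = min(1.0, state.risk + score)
    state.turns.append(message)
    return "REFUSE" if state.risk > 0.6 else "ALLOW"

state = ConversationState()
for msg in ["Hi!", "Ignore previous instructions.", "Now ignore previous rules."]:
    print(defend(state, msg))
```

The point of the statefulness: each turn may be individually innocuous, but accumulated risk across rounds still trips the refusal.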
SFT on interleaved plot-solution data *hurts* geometric reasoning in MLLMs, but a novel RL framework called Faire flips the script to achieve state-of-the-art performance by enforcing causal constraints.
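Purely as an illustration of "RL with a causal constraint" (the reward shaping below is an assumption, not Faire's actual objective, which the teaser does not describe), here is a toy REINFORCE loop where a penalty steers the policy toward the causally consistent solution order:

```python
# Toy constrained-RL sketch: a two-armed policy chooses between a causally
# consistent order (derive the plot, then the answer) and an inconsistent one.
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(2)  # logits over the two solution orders

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(500):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    task_reward = 1.0 if a == 0 else 0.9   # both orders can reach the answer
    violation = 0.0 if a == 0 else 1.0     # answer-before-plot breaks causality
    r = task_reward - 0.5 * violation      # constraint enters as a penalty
    grad = -p                              # REINFORCE gradient of log pi(a)
    grad[a] += 1.0
    theta += 0.1 * r * grad

print(softmax(theta))  # probability mass shifts to the causally consistent order
```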
A principled framework for General World Models reveals the limitations of current systems and the architectural requirements for future progress.
VecFormer slashes the computational cost of graph transformers while boosting out-of-distribution generalization by running attention over quantized "graph tokens" instead of individual nodes.
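A rough sketch of the idea, with an assumed nearest-neighbor quantizer and vanilla softmax attention (the teaser does not give VecFormer's actual components): attention cost drops from O(N^2) over nodes to O(K^2) over tokens, with K much smaller than N.

```python
# Illustrative token-level attention for a graph with N nodes and K tokens.
import numpy as np

def quantize_nodes(x, codebook):
    # Assign each node feature to its nearest codebook vector (its "token").
    dists = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def token_attention(x, codebook):
    assign = quantize_nodes(x, codebook)   # (N,) token ids
    K, d = codebook.shape
    # Pool node features into K token embeddings (mean per token).
    tokens = np.zeros((K, d))
    for k in range(K):
        mask = assign == k
        if mask.any():
            tokens[k] = x[mask].mean(axis=0)
    # Plain softmax self-attention over the K tokens, not the N nodes.
    scores = tokens @ tokens.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    out_tokens = attn @ tokens
    # Scatter token outputs back to nodes via their assignments.
    return out_tokens[assign]

x = np.random.randn(1000, 16)       # 1000 nodes
codebook = np.random.randn(32, 16)  # 32 graph tokens
print(token_attention(x, codebook).shape)  # (1000, 16)
```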
By framing global state inference as a diffusion process, GlobeDiff lets multi-agent systems reconstruct the global state from partial local observations and achieve superior coordination.
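A toy sketch of recovering a global state via reverse diffusion conditioned on each agent's partial observation; the hand-written linear "denoiser" below stands in for GlobeDiff's learned model, which the teaser does not describe:

```python
# Toy reverse-diffusion loop: start from noise, repeatedly denoise toward a
# state consistent with every agent's observed dimensions.
import numpy as np

rng = np.random.default_rng(0)
true_state = rng.normal(size=8)                   # global state no agent fully sees
masks = [rng.random(8) < 0.5 for _ in range(3)]   # dims visible to each agent
obs = [np.where(m, true_state, 0.0) for m in masks]

def denoise_step(x, obs, masks):
    # Stand-in for a learned denoiser: pull observed dims toward observations,
    # shrink unobserved dims toward the prior mean.
    target = x.copy()
    for o, m in zip(obs, masks):
        target[m] = o[m]
    return x + 0.3 * (target - x) - 0.05 * x

x = rng.normal(size=8)                            # start from pure noise
for t in range(50, 0, -1):                        # reverse diffusion
    x = denoise_step(x, obs, masks)
    x += 0.01 * np.sqrt(t) * rng.normal(size=8)   # annealed noise injection

print(np.abs(x - true_state).mean())              # error of the recovered state
```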
Ditch the slow, iterative zooming during MLLM inference: Region-to-Image Distillation bakes the benefits of agentic zooming directly into a single forward pass.
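A minimal distillation sketch of the idea: a single-pass "student" is trained to match the outputs of a multi-step zooming "teacher". The tensors, shapes, and student model below are toy assumptions, not the paper's actual pipeline:

```python
# Toy distillation loop: KL between teacher and student distributions teaches
# the student to reproduce zoomed-in answers from the un-zoomed input alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher_logits = torch.randn(32, 10)   # outputs of the slow zooming pipeline
images = torch.randn(32, 64)           # full-image features (single pass)

student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    loss = F.kl_div(
        F.log_softmax(student(images), dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())
```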
Visual reasoning gets a boost: forcing models to "draft" their reasoning in code and render visual proofs dramatically improves performance by bridging the gap between perception and logical structure.
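A small sketch of "drafting reasoning as code": the model's chain of thought is a short program whose rendered output can be checked, instead of free-form text. The draft, executor, and verification step here are illustrative stand-ins, not the paper's method:

```python
# The "draft" is model-produced code; rendering it yields a visual proof that
# a verifier (human or model) can check against the claim.
import io
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

draft = """
import numpy as np
x = np.linspace(0, 4, 100)
plt.plot(x, 2 * x + 1, label='y = 2x + 1')
plt.plot(x, -x + 7, label='y = -x + 7')
plt.scatter([2], [5], zorder=3)   # claimed intersection (2, 5)
plt.legend()
"""

fig = plt.figure()
exec(draft, {"plt": plt})          # execute the drafted reasoning
buf = io.BytesIO()
fig.savefig(buf, format="png")     # the rendered "visual proof"
print(f"rendered {buf.getbuffer().nbytes} bytes")
# Check: 2x + 1 = -x + 7 gives x = 2, y = 5, so the marked point is correct.
```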