LLMs signal their internal certainty during answer decoding through predictable attention patterns on their own reasoning traces.
Even near-perfect planning ability in LLMs doesn't ensure safety in robotic tasks, with the best models still generating dangerous plans almost 30% of the time.
Reconstructing screen content from a distance is now possible with higher fidelity and robustness, thanks to a new physically-guided deep learning approach that overcomes key instabilities in optical projection.
Turns out, telling LLMs *not* to use the answer when generating reverse chain-of-thought reasoning can actually make them *more* reliant on it, but a skeleton-guided approach breaks the cycle.
A 3B-parameter model now rivals models 10x its size in reasoning, alignment, and agentic tasks, challenging the assumption that bigger is always better.
Robots can now understand and act on ambiguous instructions like "I'm thirsty" in real-time, thanks to a new framework that combines visual reasoning with interactive exploration.