Search papers, labs, and topics across Lattice.
LLMs respond to increasingly difficult out-of-distribution inputs by activating sparser representations in their last hidden states, revealing a quantifiable relationship between task difficulty and activation sparsity.
Video diffusion models can now generate physically plausible 4D worlds thanks to a new pipeline that combines pretraining, supervised fine-tuning, and reinforcement learning.
LLM agents can learn complex, multi-turn tasks far more effectively by explicitly separating planning from execution, using a hierarchical RL approach with carefully designed credit assignment.
Forget retraining: this method adapts diffusion models to new tasks *without any training*, using a clever trick based on Doob's h-transform.