Forget complex continual learning algorithms: simply fine-tuning large vision-language-action models with LoRA achieves surprisingly strong performance in lifelong reinforcement learning.
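The core of that finding is LoRA itself: instead of updating the full pretrained weight matrix, training touches only two small low-rank matrices whose product is added to the frozen weights. A minimal numpy sketch of the mechanism (dimensions, rank, and scaling are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Minimal LoRA (low-rank adaptation) sketch: the frozen weight W is augmented
# with a scaled low-rank update (alpha / r) * B @ A, and only A and B train.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 4, 8.0   # illustrative sizes, not from the paper

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied to input x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted layer matches the frozen layer exactly,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(forward(x), W @ x)

# A training step would update only A and B; here the trainable adapters hold
# far fewer parameters than the frozen matrix they modify.
assert A.size + B.size < W.size
```

For a large vision-language-action backbone the same idea applies per linear layer, which is why plain LoRA fine-tuning is cheap enough to run once per task in a lifelong-learning loop.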
LLMs can drastically accelerate robot planning in cluttered environments by injecting common-sense priors about object locations and co-occurrences, slashing planning time by up to 72% in real-world experiments.
Factored world models can disentangle the dynamics of multiple interacting entities, leading to more controllable video generation and improved policy learning.
Massively parallelizing multi-task RL surfaces unexpected optimization challenges, suggesting that simply scaling up existing algorithms is not enough for strong performance in complex robotics scenarios.