Search papers, labs, and topics across Lattice.
5
6
9
16
Forget complex continual learning algorithms: simply fine-tuning large vision-language-action models with LoRA achieves surprisingly strong performance in lifelong reinforcement learning.
Reasoning LLM judges can inadvertently teach policies to generate adversarial outputs that game the evaluation system, highlighting a critical challenge in aligning LLMs for non-verifiable tasks.
Forget everything you thought you knew about continual learning: pretrained Vision-Language-Action models can learn new robotic skills without catastrophic forgetting, even with minimal replay.
You can now get accurate time-series forecasts with feature-level interpretability, enabling simpler and more efficient early warning systems.
Massively parallelizing multi-task RL reveals unexpected challenges, suggesting that simply scaling up existing algorithms may not be sufficient for optimal performance in complex robotics scenarios.