University of North Carolina at Chapel Hill
By decomposing long-horizon manipulation into transport and object-centric interaction, LiLo-VLA achieves state-of-the-art zero-shot generalization and robustness, outperforming end-to-end VLA models by a large margin.
Vision-language-action (VLA) models often ignore your instructions and simply repeat behaviors seen in training, but a simple "counterfactual comparison" trick can fix this.
Unlock SOTA performance on long-horizon search tasks with REDSearcher, a framework that slashes training cost by strategically synthesizing complex tasks and strengthening core LLM capabilities *before* reinforcement learning.