RL agents can learn more robust vision-and-language navigation policies by exploring diverse trajectories and comparing their performance, even without expert demonstrations or value networks.
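One way to compare sampled trajectories without a learned value network is to score each rollout against its peers in the same group, in the style of group-relative policy optimization. A minimal sketch under that assumption (the function name and example returns are hypothetical, not from the paper):

```python
import numpy as np

def group_relative_advantages(returns):
    """Score each sampled trajectory against its group-mates,
    using the group's own statistics in place of a value baseline."""
    r = np.asarray(returns, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Hypothetical example: episodic returns of four navigation
# rollouts sampled for the same instruction.
adv = group_relative_advantages([2.0, 0.5, 1.5, 0.0])
# Advantages sum to ~0; the best rollout gets the largest positive score.
```

The normalized advantages can then weight a policy-gradient update directly, so no critic needs to be trained or stored.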
Ditch discrete waypoints: VLA models can now generate smooth, physically plausible robot trajectories by directly regressing continuous action functions.
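To make the idea of regressing a continuous action function concrete: instead of emitting a list of waypoints, the policy head can output coefficients of basis functions, and the trajectory is then queryable at any time in the horizon. A minimal sketch with a polynomial basis (the function, the coefficient layout, and the 2-DoF example are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def eval_action_function(coeffs, t):
    """Evaluate a continuous action trajectory a(t) = sum_k c_k * t**k.

    `coeffs` has shape (action_dim, degree + 1); `t` can be any float in
    [0, 1], so the controller can query the policy at arbitrary rates
    instead of interpolating between discrete waypoints.
    """
    powers = np.array([t ** k for k in range(coeffs.shape[1])])
    return coeffs @ powers

# Hypothetical 2-DoF arm: each row is one joint's polynomial coefficients.
coeffs = np.array([[0.0, 1.0, 0.0],    # joint 0: a(t) = t
                   [0.5, 0.0, -0.5]])  # joint 1: a(t) = 0.5 - 0.5 t^2
a_mid = eval_action_function(coeffs, 0.5)  # joint actions at the midpoint
```

Because the output is a smooth function of time, velocity and acceleration limits can be checked analytically on the coefficients, which is one route to the physical plausibility the blurb mentions.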
Achieve stable continual learning without catastrophic forgetting by fixing classifier weights to an Equiangular Tight Frame and aligning features geometrically.
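A simplex Equiangular Tight Frame has a closed form: K unit-norm class vectors with identical pairwise cosine -1/(K-1), the geometry predicted by neural collapse. A minimal sketch of constructing such a fixed classifier (function name and dimensions are illustrative):

```python
import numpy as np

def simplex_etf(num_classes, feat_dim, seed=0):
    """Build a fixed simplex ETF classifier: `num_classes` unit-norm
    weight vectors whose pairwise cosine is -1/(num_classes - 1)."""
    K = num_classes
    rng = np.random.default_rng(seed)
    # Orthonormal columns via reduced QR (requires feat_dim >= K).
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, K)))
    # Centering matrix scaled so each column has unit norm.
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    return U @ M  # shape (feat_dim, K); column k is class k's weight

W = simplex_etf(num_classes=5, feat_dim=16)
G = W.T @ W  # Gram matrix: 1 on the diagonal, -1/4 off the diagonal
```

Since the classifier is fixed, continual learning reduces to pulling each class's features toward its pre-assigned vertex, so later tasks cannot rotate or overwrite the class geometry learned earlier.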