Thomas Lord Department of Computer Science, University of Southern California ∗Corresponding author
Stanford HAI
Grounding reward learning in natural language rationales makes policies 2x more robust to spurious correlations and distribution shifts.
Learning robotic reward functions from a million trajectories reveals that comparing entire trajectories, not just individual frames, unlocks better generalization and learning from suboptimal data.
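The idea of comparing entire trajectories rather than individual frames can be sketched as a Bradley-Terry preference loss over whole-trajectory returns. This is a minimal illustration, not the paper's actual method: the `reward` function, the toy float-valued states, and the trajectory data below are all hypothetical.

```python
import math

def trajectory_return(reward_fn, traj):
    # Score the whole trajectory by summing per-frame rewards,
    # instead of judging any single frame in isolation.
    return sum(reward_fn(s) for s in traj)

def preference_loss(reward_fn, traj_a, traj_b):
    # Bradley-Terry model: probability that traj_a is preferred
    # over traj_b, computed from trajectory-level returns.
    ra = trajectory_return(reward_fn, traj_a)
    rb = trajectory_return(reward_fn, traj_b)
    p_a = 1.0 / (1.0 + math.exp(rb - ra))
    # Negative log-likelihood of the preferred trajectory winning.
    return -math.log(p_a)

# Toy example (hypothetical): states are floats, reward is the state value.
reward = lambda s: s
preferred  = [0.9, 0.8, 1.0]  # labeled as the better trajectory
suboptimal = [0.1, 0.2, 0.0]  # labeled as the worse trajectory
loss = preference_loss(reward, preferred, suboptimal)
```

Because the comparison is over summed returns, a trajectory can be preferred overall even if some individual frames look worse, which is what lets this style of objective learn from suboptimal data.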