The paper introduces Quality over Quantity (QoQ), a method for curating robot learning datasets by identifying high-quality demonstrations based on their contribution to reducing loss on validation data. QoQ leverages influence functions to efficiently estimate the impact of individual training samples on model performance, adapting them for robot demonstrations by using maximum influence across validation samples and aggregating influence scores within trajectories. Experiments in simulated and real-world environments demonstrate that QoQ outperforms existing data selection techniques, leading to improved policy performance.
Forget manual labeling: influence functions can automatically surface high-quality robot demonstrations, boosting policy performance by intelligently curating training data.
Learning from demonstrations has emerged as a promising paradigm for end-to-end robot control, particularly when scaled to diverse and large datasets. However, the quality of demonstration data, often collected through human teleoperation, remains a critical bottleneck for effective data-driven robot learning. Human errors, operational constraints, and teleoperator variability introduce noise and suboptimal behaviors, making data curation essential yet largely manual and heuristic-driven. In this work, we propose Quality over Quantity (QoQ), a grounded and systematic approach to identifying high-quality data by defining data quality as the contribution of each training sample to reducing loss on validation demonstrations. To efficiently estimate this contribution, we leverage influence functions, which quantify the impact of individual training samples on model performance. We further introduce two key techniques to adapt influence functions for robot demonstrations: (i) using maximum influence across validation samples to capture the most relevant state-action pairs, and (ii) aggregating influence scores of state-action pairs within the same trajectory to reduce noise and improve data coverage. Experiments in both simulated and real-world settings show that QoQ consistently improves policy performance over prior data selection methods.
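The two adaptations described in the abstract, max-over-validation scoring and within-trajectory aggregation, can be sketched as follows. This is an illustrative sketch only: the influence matrix is assumed to be precomputed by some influence-function approximation, and the function name, mean aggregation, and toy shapes are our assumptions, not details from the paper.

```python
import numpy as np

def qoq_trajectory_scores(influence, traj_ids):
    """Score trajectories from a precomputed influence matrix (sketch).

    influence: (n_train, n_val) array, where influence[i, j] estimates how
        much training state-action pair i reduces loss on validation sample j
        (assumed to come from an influence-function approximation).
    traj_ids: length-n_train array mapping each state-action pair to the
        trajectory it belongs to.
    Returns a dict {trajectory id: aggregated score}.
    """
    # (i) max influence across validation samples: score each training pair
    # by the validation state-action pair it helps the most
    per_sample = influence.max(axis=1)
    # (ii) aggregate scores of pairs within the same trajectory (mean is an
    # assumed choice of aggregator) to reduce per-sample noise
    return {int(t): float(per_sample[traj_ids == t].mean())
            for t in np.unique(traj_ids)}

# Toy usage: 3 training pairs (2 in trajectory 0, 1 in trajectory 1),
# 2 validation samples
infl = np.array([[0.2, 0.9],
                 [0.1, 0.3],
                 [0.5, 0.4]])
scores = qoq_trajectory_scores(infl, np.array([0, 0, 1]))
```

Trajectories with the highest aggregated scores would then be kept for training, while low-scoring ones are pruned.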