The paper introduces the Great March 100 (GM-100), a benchmark comprising 100 detail-oriented embodied AI tasks designed to evaluate a wide range of robotic agent capabilities, including long-tail behaviors and diverse interactions. GM-100 addresses the limitations of existing datasets by providing a more systematic and challenging evaluation suite based on human-object interaction primitives and object affordances. Experiments demonstrate that GM-100 tasks are feasible to execute and effectively differentiate the performance of current vision-language-action (VLA) models.
Current robot learning benchmarks may be too simplistic: GM-100 offers 100 new, challenging tasks designed to expose the long-tail behaviors that existing benchmarks miss.
With the rapid development of robot learning and imitation learning, numerous datasets and methods have emerged. However, these datasets and their task designs often lack systematic consideration and guiding principles. This raises important questions: do current datasets and task designs truly advance the capabilities of robotic agents? Can evaluations on a handful of common tasks accurately differentiate the performance of methods proposed by different teams and evaluated on different tasks? To address these issues, we introduce the Great March 100 (GM-100) as a first step toward a robot learning Olympics. GM-100 consists of 100 carefully designed tasks covering a wide range of interactions and long-tail behaviors, aiming to provide a diverse and challenging suite that comprehensively evaluates the capabilities of robotic agents and promotes diversity and complexity in robot dataset task design. The tasks are developed through systematic analysis and expansion of existing task designs, combined with insights from human-object interaction primitives and object affordances. We collect a large amount of trajectory data on different robotic platforms and evaluate several baseline models. Experimental results demonstrate that the GM-100 tasks are 1) feasible to execute and 2) sufficiently challenging to effectively differentiate the performance of current vision-language-action (VLA) models. Our data and code are available at https://rhos.ai/research/gm-100.