The paper introduces RADAR, a framework that uses retrieval-augmented VLMs to analyze robot policy generalization by comparing test-time evaluation tasks to policy training data. RADAR first retrieves relevant training examples using generalist policy embeddings and then uses VLMs to analyze the evaluation task against the retrieved data, classifying the type of policy generalization required. Experiments demonstrate that VLMs effectively analyze data for generalization, and the retrieval step accurately identifies examples needed for classification, even scaling to large datasets.
RADAR offers a scalable, interpretable framework for understanding robot policy generalization by directly linking test-time performance to the training data, revealing the specific types of generalization required.
Recent work on robot manipulation has advanced policy generalization to novel scenarios. However, it is often difficult to characterize how different evaluation settings actually represent generalization from a given policy's training distribution. To move toward more precise evaluation of generalization in robotics, we propose RADAR, a scalable framework for directly comparing test-time evaluation tasks to policy training data in order to determine what form of policy generalization is required. RADAR is a two-stage pipeline: first, retrieval using generalist policy embeddings identifies which training examples are relevant to a given evaluation task; then, vision-language models (VLMs) analyze the evaluation task against the retrieved data, producing interpretable analysis of how they compare along a variety of axes and an overall classification of the type of policy generalization required. Through controlled experiments, we demonstrate that VLMs are effective at analyzing data for generalization, and that our retrieval step reliably surfaces the examples needed to make accurate classifications with respect to the training data. Furthermore, we scale RADAR to large-scale datasets, where we observe agreement with human-defined benchmark conditions from prior work. We provide demonstrations at radar-analysis.github.io.
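The two-stage pipeline described above can be sketched in miniature. The snippet below is a minimal illustration, not the paper's implementation: the embeddings are toy vectors standing in for generalist policy embeddings, the task descriptions are hypothetical, and the second stage only assembles the kind of comparison prompt a VLM would receive rather than calling an actual model.

```python
import numpy as np

def retrieve_top_k(task_embedding, train_embeddings, k=2):
    """Stage 1 (sketch): rank training examples by cosine similarity
    between a toy task embedding and toy training-example embeddings."""
    sims = train_embeddings @ task_embedding / (
        np.linalg.norm(train_embeddings, axis=1) * np.linalg.norm(task_embedding)
    )
    return np.argsort(-sims)[:k]

def build_vlm_prompt(task_desc, retrieved_descs):
    """Stage 2 (stub): assemble a comparison prompt for a VLM.
    The actual RADAR prompt and comparison axes are not reproduced here;
    this only shows the shape of the retrieval-augmented query."""
    context = "\n".join(f"- {d}" for d in retrieved_descs)
    return (
        f"Evaluation task: {task_desc}\n"
        f"Most similar training examples:\n{context}\n"
        "Classify the type of policy generalization required."
    )

# Hypothetical 3-D embeddings for three training episodes and one eval task.
train = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0]])
descs = ["pick red block", "pick orange block", "open drawer"]
task = np.array([1.0, 0.05, 0.0])

idx = retrieve_top_k(task, train, k=2)
prompt = build_vlm_prompt("pick maroon block", [descs[i] for i in idx])
print(idx)     # indices of the two nearest training examples
print(prompt)  # retrieval-augmented prompt the VLM would analyze
```

In this toy setup, the two block-picking episodes are retrieved and the unrelated drawer episode is filtered out, so the VLM's classification would be grounded in the relevant slice of training data rather than the full dataset.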