This paper introduces a multi-camera view scaling framework to improve the data efficiency and generalization of robot imitation learning policies. The approach generates pseudo-demonstrations by leveraging multiple synchronized camera perspectives during demonstration collection, enriching the training distribution and improving viewpoint invariance. Experiments in simulation and real-world manipulation tasks show significant gains in data efficiency and generalization compared to single-view baselines, especially when combined with camera-space action representations and a multiview action aggregation method.
The generalization ability of imitation learning policies for robotic manipulation is fundamentally constrained by the diversity of expert demonstrations, yet collecting demonstrations across varied environments is costly and difficult in practice. In this paper, we propose a practical framework that exploits inherent scene diversity without additional human effort by scaling camera views during demonstration collection. Instead of acquiring more trajectories, we use multiple synchronized camera perspectives to generate pseudo-demonstrations from each expert trajectory, which enriches the training distribution and improves viewpoint invariance in visual representations. We analyze how different action spaces interact with view scaling and show that camera-space representations further enhance diversity. In addition, we introduce a multiview action aggregation method that allows single-view policies to benefit from multiple cameras during deployment. Extensive experiments in simulation and on real-world manipulation tasks demonstrate significant gains in data efficiency and generalization over single-view baselines. Our results suggest that scaling camera views offers a practical and scalable solution for imitation learning, requiring minimal additional hardware and integrating seamlessly with existing imitation learning algorithms. Our project website is https://yichen928.github.io/robot_multiview.
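The two core ideas in the abstract (turning one multi-camera trajectory into per-view pseudo-demonstrations, and aggregating per-view predictions at deployment) can be sketched as follows. This is an illustrative toy, not the paper's code: the data layout (`Step`, `Demo`), the one-pseudo-demo-per-camera split, and the simple averaging rule in `aggregate_actions` are all assumptions for the sake of the sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple
import statistics

# Hypothetical data layout: each timestep stores a synchronized frame
# from every camera plus the expert action (frames stubbed as strings).
@dataclass
class Step:
    frames: List[str]    # frames[k] is the image from camera k
    action: List[float]  # expert action at this timestep

@dataclass
class Demo:
    steps: List[Step]

def view_scaled_demos(demo: Demo, num_views: int) -> List[List[Tuple[str, List[float]]]]:
    """Split one multi-camera trajectory into one pseudo-demonstration
    per camera view: each keeps only that view's frames, paired with
    the same expert actions."""
    return [
        [(step.frames[k], step.action) for step in demo.steps]
        for k in range(num_views)
    ]

def aggregate_actions(per_view_actions: List[List[float]]) -> List[float]:
    """Toy multiview aggregation at deployment: average the action each
    single-view policy predicts from its own camera (one plausible rule;
    the paper's actual aggregation may differ)."""
    dims = len(per_view_actions[0])
    return [statistics.fmean(a[d] for a in per_view_actions) for d in range(dims)]
```

With two cameras, a one-step demo yields two pseudo-demonstrations, so the training set grows with camera count at no extra collection cost; at deployment, `aggregate_actions([[1.0, 0.0], [0.0, 1.0]])` returns `[0.5, 0.5]`.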