This paper introduces a pixel-level visual servoing approach, Pixel2Catch, for catching thrown objects using a single RGB camera, avoiding explicit 3D pose estimation. A heterogeneous multi-agent reinforcement learning framework is proposed, treating the robot arm and multi-fingered hand as separate agents with specialized roles and reward functions. The approach demonstrates successful sim-to-real transfer of the learned catching policies, achieving agile manipulation.
Achieve robust sim-to-real transfer for robotic catching by training a multi-agent system on raw pixel inputs, bypassing the need for explicit 3D pose estimation.
To catch a thrown object, a robot must perceive the object's motion and generate control actions in a timely manner. Rather than explicitly estimating the object's 3D position, this work proposes a novel approach that infers object motion from pixel-level visual cues extracted from a single RGB image. These cues capture changes in the object's image-plane position and apparent scale, allowing the policy to reason about the object's motion without 3D pose estimation. Furthermore, to achieve stable learning in a high-DoF system composed of a robot arm equipped with a multi-fingered hand, we design a heterogeneous multi-agent reinforcement learning framework that defines the arm and hand as independent agents with distinct roles. Each agent is trained cooperatively using role-specific observations and rewards, and the learned policies are successfully transferred from simulation to the real world.
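To make the pixel-level idea concrete, the following is a minimal sketch (not the paper's implementation) of how changes in a 2D bounding box across consecutive frames can encode object motion: the box center tracks image-plane motion, while the change in box scale serves as a proxy for motion toward the camera. The `Box` type and the upstream detector producing it are assumptions for illustration.

```python
# Hedged sketch: building a pixel-level motion cue from per-frame 2D bounding
# boxes, with no 3D pose estimation. An upstream detector (assumed, not shown)
# is taken to yield one box per RGB frame.
from dataclasses import dataclass

@dataclass
class Box:
    cx: float  # box center x (pixels)
    cy: float  # box center y (pixels)
    w: float   # box width (pixels)
    h: float   # box height (pixels)

def pixel_motion_cue(prev: Box, curr: Box) -> list[float]:
    """Observation vector: current center plus frame-to-frame deltas and scale ratio."""
    scale_prev = prev.w * prev.h
    scale_curr = curr.w * curr.h
    return [
        curr.cx, curr.cy,                    # where the object is in the image
        curr.cx - prev.cx,                   # lateral image-plane velocity
        curr.cy - prev.cy,                   # vertical image-plane velocity
        scale_curr / max(scale_prev, 1e-6),  # scale ratio > 1: object approaching
    ]

# Object drifting right, rising slightly, and growing in apparent size:
cue = pixel_motion_cue(Box(310, 240, 40, 40), Box(320, 236, 44, 44))
```

A policy consuming such cues can, in principle, react to approach speed (via the scale ratio) without ever recovering metric depth.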
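The heterogeneous decomposition above can be illustrated with a minimal sketch of role-specific rewards; the function names and reward terms here are hypothetical, not the paper's actual formulation. The arm agent is shaped toward aligning the hand with the object, while the hand agent is shaped toward closing the fingers only when contact is made.

```python
# Hedged sketch of role-specific rewards for a heterogeneous two-agent setup
# (illustrative terms only): the arm agent handles reaching, the hand agent
# handles grasp timing.
import math

def arm_reward(hand_px: tuple, obj_px: tuple) -> float:
    """Arm agent: minimize image-plane distance between hand and object."""
    return -math.dist(hand_px, obj_px)

def hand_reward(contact: bool, grasp_closed: bool) -> float:
    """Hand agent: reward a secure grasp; penalize closing with no contact."""
    if contact and grasp_closed:
        return 1.0   # successful catch
    if grasp_closed and not contact:
        return -0.5  # premature closing
    return 0.0
```

Training each agent on its own observation and reward, while sharing the environment, is one common way to stabilize learning in a high-DoF arm-plus-hand system.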