This paper introduces a framework for robotic cloth manipulation that integrates PDDL-based symbolic task planning with physics-based simulation and vision-guided real-world control. The system uses PDDL to generate high-level plans, which are translated into low-level actions for a Franka Emika Panda robot, guided by a CeDiRNet-based vision pipeline for cloth corner detection. Experiments demonstrate successful Sim2Real transfer from IsaacSim to a real-world setup, with folding tasks executed reliably in both environments.
Achieve reliable robotic cloth folding in the real world by combining PDDL planning with vision-guided control, enabling successful Sim2Real transfer.
Manipulating deformable objects such as textiles remains a significant challenge in robotics due to their complex dynamics, unpredictable configurations, and high-dimensional state space. In this paper, we present an integrated planning and execution framework for robotic cloth manipulation that combines symbolic task planning, physics-based simulation, and vision-guided real-world control. High-level plans are generated using Planning Domain Definition Language (PDDL) and translated into low-level primitive actions executed on a Franka Emika Panda robot. To enable perceptual grounding, we employ a vision-based pipeline using an adapted CeDiRNet model for cloth corner detection and grasp point estimation. The system is first validated in IsaacSim using a particle-based deformable cloth model and then transferred to a real-world setup with consistent performance. Our results demonstrate successful Sim2Real transfer of high-level plans, enabling reliable execution of folding tasks in both simulated and physical environments.
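The plan-to-primitive translation described above can be sketched as a dispatch from symbolic actions to robot primitives. The action names, primitive signatures, and dispatch-table design below are illustrative assumptions for exposition, not the paper's actual interface:

```python
# Hypothetical sketch of a PDDL-plan-to-primitive translation layer.
# Action names and primitive signatures are assumptions, not the
# paper's actual interface.

from typing import Callable, Dict, List, Tuple

# A symbolic plan as a task planner might emit it: (action, arguments).
Plan = List[Tuple[str, Tuple[str, ...]]]

def grasp(corner: str) -> str:
    # In the real system this would command the robot to grasp a
    # vision-detected cloth corner; here we just record the call.
    return f"grasp({corner})"

def fold_to(corner: str, target: str) -> str:
    # Move the grasped corner onto the target corner.
    return f"fold({corner}->{target})"

def release(corner: str) -> str:
    return f"release({corner})"

# Dispatch table from symbolic PDDL actions to low-level primitives.
PRIMITIVES: Dict[str, Callable[..., str]] = {
    "grasp-corner": grasp,
    "fold-corner": fold_to,
    "release-corner": release,
}

def execute_plan(plan: Plan) -> List[str]:
    """Translate each symbolic step into its primitive and run it."""
    trace = []
    for action, args in plan:
        trace.append(PRIMITIVES[action](*args))
    return trace

# A single half-fold: bring the top-left corner onto the bottom-left one.
plan: Plan = [
    ("grasp-corner", ("top_left",)),
    ("fold-corner", ("top_left", "bottom_left")),
    ("release-corner", ("top_left",)),
]
print(execute_plan(plan))
```

The dispatch-table design keeps the symbolic layer ignorant of robot details, which is one common way to make the same high-level plan executable in both simulation and on real hardware.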