CFCS, School of CS, PKU
Achieves both long-horizon planning and fine-grained control in robotic imitation learning by predicting action sequences at multiple frequencies.
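The idea of predicting actions at different frequencies can be sketched as a two-level loop: a low-frequency head plans coarse waypoints over a long horizon, and a high-frequency head fills in fine-grained controls between them. This is a minimal illustrative sketch, not the paper's method; all function names and the random waypoint planner are hypothetical stand-ins.

```python
import numpy as np

def plan_waypoints(obs, horizon=4):
    # Hypothetical low-frequency planner: one coarse 3-DoF target per step.
    rng = np.random.default_rng(0)
    return rng.normal(size=(horizon, 3))

def refine_controls(waypoints, steps_per_waypoint=10):
    # Hypothetical high-frequency controller: interpolate fine-grained
    # actions toward each coarse waypoint.
    actions = []
    pos = np.zeros(3)
    for wp in waypoints:
        for t in range(1, steps_per_waypoint + 1):
            actions.append(pos + (wp - pos) * t / steps_per_waypoint)
        pos = wp
    return np.stack(actions)

coarse = plan_waypoints(obs=None)          # 4 long-horizon waypoints
fine = refine_controls(coarse)             # 40 fine-grained actions
print(fine.shape)                          # (40, 3)
```

The split lets the planner reason over seconds while the controller acts at every timestep, which is the trade-off the summary above describes.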
Robots can now manipulate objects with greater dexterity and adaptability thanks to a new world model that leverages both vision and high-frequency tactile feedback to predict and react to contact dynamics.
GraspALL achieves 32-44% better garment grasping accuracy in low-light by adaptively fusing RGB and depth data based on a learned illumination intensity reference.
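Adaptive fusion driven by illumination can be sketched as a brightness-dependent weighting between RGB and depth features. This is an assumption-laden illustration, not GraspALL's published architecture: the sigmoid gate and the fixed `reference` value stand in for the learned illumination intensity reference mentioned above.

```python
import numpy as np

def illumination_weight(rgb, reference=0.5, sharpness=10.0):
    # Mean image brightness in [0, 1]; darker scenes push the fusion
    # weight toward the depth branch. `reference` is a stand-in for a
    # learned illumination reference (hypothetical value).
    brightness = rgb.mean()
    return 1.0 / (1.0 + np.exp(-sharpness * (brightness - reference)))

def fuse_features(rgb_feat, depth_feat, rgb):
    # Convex combination of the two modality features.
    w = illumination_weight(rgb)
    return w * rgb_feat + (1.0 - w) * depth_feat

dark_rgb = np.full((8, 8, 3), 0.1)     # low-light frame
bright_rgb = np.full((8, 8, 3), 0.9)   # well-lit frame
rgb_feat, depth_feat = np.ones(4), np.zeros(4)
print(fuse_features(rgb_feat, depth_feat, dark_rgb))    # leans on depth
print(fuse_features(rgb_feat, depth_feat, bright_rgb))  # leans on RGB
```

In low light the gate output is near 0, so depth dominates; in bright scenes it is near 1, so RGB dominates, matching the adaptive behavior the summary claims.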
Finally, a robot can reliably pick out that specific shirt from a messy pile, thanks to a new vision-language pipeline that reasons about garment affordances.