This paper introduces SUBTA, a bimanual teleoperation system that integrates learned intention estimation, scene-graph task planning, and context-dependent motion assists to improve human-robot collaboration in structured assembly tasks. A user study (N=12) compared SUBTA against standard teleoperation and a motion-support-only condition, showing significant improvements in position and orientation accuracy as well as reduced mental demand. The system's clearer visual feedback and predictable interventions made for a more effective and user-friendly teleoperation experience.
Achieve significantly higher accuracy and lower mental demand in bimanual teleoperation by intelligently coupling intention estimation with scene-graph task planning and context-aware motion assistance.
In human-robot collaboration, shared autonomy enhances human performance through precise, intuitive support. Effective robotic assistance requires accurately inferring human intentions and understanding task structure to determine when and how to intervene. In this paper, we present SUBTA, a supported teleoperation system for bimanual assembly that couples learned intention estimation, scene-graph task planning, and context-dependent motion assists. We validate our approach through a user study (N=12) comparing standard teleoperation, a motion-support-only condition, and SUBTA. Linear mixed-effects analysis revealed that SUBTA significantly outperformed standard teleoperation in position accuracy (p<0.001, d=1.18) and orientation accuracy (p<0.001, d=1.75), while reducing mental demand (p=0.002, d=1.34). Post-experiment ratings indicate clearer, more trustworthy visual feedback and more predictable interventions under SUBTA. The results demonstrate that SUBTA substantially improves both effectiveness and user experience in teleoperation.
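The effect sizes above are reported as Cohen's d from a within-subject comparison. As a minimal sketch of how such a paired effect size can be computed (the data values, variable names, and the choice of the paired-differences formulation are illustrative assumptions, not the paper's actual data or analysis code):

```python
from statistics import mean, stdev

def cohens_d_paired(a, b):
    # Within-subject Cohen's d: mean of the paired differences
    # divided by the sample standard deviation of those differences.
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / stdev(diffs)

# Hypothetical per-participant position errors (mm) under two conditions
standard = [4.1, 3.8, 5.0, 4.6, 4.9, 4.4]  # standard teleoperation
subta = [2.9, 3.0, 3.5, 3.1, 3.6, 3.2]     # SUBTA condition

d = cohens_d_paired(standard, subta)
print(f"d = {d:.2f}")  # d > 0.8 is conventionally a large effect
```

The full analysis in the paper uses linear mixed-effects models, which additionally account for per-participant random effects; the sketch shows only the effect-size arithmetic.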