The paper addresses the problem of suboptimal robot motion arising from interface limitations in assistive robotics, where users with motor impairments control high-DoF robots through low-dimensional interfaces. The authors introduce a trajectory reconstruction algorithm that considers task, environment, and interface constraints to infer the user's intended motion in the robot's full control space from limited demonstrations. Experiments using real-world demonstrations with 2-D joystick and 1-D sip-and-puff interfaces on 7-DoF robotic arms show that the reconstructed trajectories lead to faster and more efficient control policies.
Overcomes suboptimal robot learning from limited control interfaces by reconstructing demonstration trajectories to respect user intent, yielding faster and more efficient robot motions.
Assistive robots offer agency to humans with severe motor impairments. Often, these users control high-DoF robots through low-dimensional interfaces, such as using a 1-D sip-and-puff interface to operate a 6-DoF robotic arm. This mismatch gives users access to only a subset of control dimensions at a given time, imposing unintended and artificial constraints on robot motion. As a result, interface-limited demonstrations embed suboptimal motions that reflect interface restrictions rather than user intent. To address this, we present a trajectory reconstruction algorithm that reasons about task, environment, and interface constraints to lift demonstrations into the robot's full control space. We evaluate our approach using real-world demonstrations of ADL-inspired tasks performed via 2-D joystick and 1-D sip-and-puff control interfaces, teleoperating two distinct 7-DoF robotic arms. Analyses of the reconstructed demonstrations and derived control policies show that lifted trajectories are faster and more efficient than their interface-constrained counterparts while respecting user preferences.
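The core intuition can be illustrated with a toy sketch (not the paper's algorithm, which additionally reasons about task, environment, and interface constraints): a user of a 1-D interface can move only one control dimension at a time, producing an axis-by-axis "staircase" path, whereas the intended motion in the robot's full control space is more direct. The hypothetical `lift` function below simply shortcuts the detour between endpoints to show why reconstruction shortens trajectories.

```python
import math

def staircase_demo(start, goal):
    """Toy interface-limited demonstration: move one axis at a time
    (as a 1-D interface user must), visiting intermediate corners."""
    path = [tuple(start)]
    cur = list(start)
    for i in range(len(start)):
        cur[i] = goal[i]
        path.append(tuple(cur))
    return path

def path_length(path):
    """Total Euclidean length of a piecewise-linear path."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def lift(path):
    """Hypothetical 'reconstruction': replace the axis-by-axis detour
    with the direct segment between endpoints. The real algorithm also
    respects obstacles and task constraints; this toy version does not."""
    return [path[0], path[-1]]

demo = staircase_demo((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(path_length(demo))        # 3.0: axis-aligned detour
print(path_length(lift(demo)))  # ~1.732: direct motion in full space
```

Even in this stripped-down setting, the lifted path is sqrt(3)/3 of the demonstrated one, mirroring the paper's finding that reconstructed trajectories are faster and more efficient than their interface-constrained counterparts.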