Ground Reaction Inertial Poser (GRIP) is a method that reconstructs physically plausible human motion from four IMUs and foot pressure data. GRIP controls a digital twin in a physics simulator via two modules: KinematicsNet, which estimates poses and velocities from sensor data, and DynamicsNet, which drives the simulated humanoid using the residual between the KinematicsNet predictions and the simulated humanoid state. Experiments on PRISM, a new large-scale dataset, show that GRIP outperforms IMU-only and IMU-pressure fusion methods in both pose accuracy and physical consistency.
By fusing IMU and insole pressure data within a physics simulation, GRIP achieves more physically plausible human motion capture than IMU-only methods.
We propose Ground Reaction Inertial Poser (GRIP), a method that reconstructs physically plausible human motion using four wearable devices. Unlike conventional IMU-only approaches, GRIP combines IMU signals with foot pressure data to capture both body dynamics and ground interactions. Furthermore, rather than relying solely on kinematic estimation, GRIP uses a digital twin of a person, in the form of a synthetic humanoid in a physics simulator, to reconstruct realistic and physically plausible motion. At its core, GRIP consists of two modules: KinematicsNet, which estimates body poses and velocities from sensor data, and DynamicsNet, which controls the humanoid in the simulator using the residual between the KinematicsNet prediction and the simulated humanoid state. To enable robust training and fair evaluation, we introduce a large-scale dataset, Pressure and Inertial Sensing for Human Motion and Interaction (PRISM), that captures diverse human motions with synchronized IMUs and insole pressure sensors. Experimental results show that GRIP outperforms existing IMU-only and IMU-pressure fusion methods across all evaluated datasets, achieving higher global pose accuracy and improved physical consistency.
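The two-module control loop described in the abstract can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the linear "networks", the sensor dimensions (4 IMUs with 6 channels each, 2 insoles with 8 pressure cells each), and the Euler integration standing in for the physics simulator are all assumptions made for the example.

```python
import numpy as np

def kinematics_net(imu, pressure, W_k):
    # Hypothetical stand-in for KinematicsNet: maps concatenated
    # sensor features to a target pose/velocity vector.
    x = np.concatenate([imu, pressure])
    return np.tanh(W_k @ x)

def dynamics_net(residual, W_d):
    # Hypothetical stand-in for DynamicsNet: maps the residual
    # between the kinematic target and the simulated state to
    # control signals (e.g., joint torques).
    return W_d @ residual

def step(sim_state, imu, pressure, W_k, W_d, dt=0.01):
    target = kinematics_net(imu, pressure, W_k)
    residual = target - sim_state  # mismatch driving the controller
    control = dynamics_net(residual, W_d)
    # A real physics simulator would integrate forces and contacts;
    # here a toy Euler update closes the loop.
    return sim_state + dt * control

rng = np.random.default_rng(0)
d_state = 6                     # toy humanoid state dimension (assumed)
d_imu, d_press = 4 * 6, 2 * 8   # 4 IMUs x 6 ch, 2 insoles x 8 cells (assumed)
W_k = 0.1 * rng.standard_normal((d_state, d_imu + d_press))
W_d = 0.1 * rng.standard_normal((d_state, d_state))

state = np.zeros(d_state)
for _ in range(3):
    imu = rng.standard_normal(d_imu)
    pressure = rng.standard_normal(d_press)
    state = step(state, imu, pressure, W_k, W_d)
print(state.shape)  # (6,)
```

The point of the sketch is the data flow: KinematicsNet never touches the simulator directly, and DynamicsNet sees only the residual, so the simulated humanoid is pulled toward the kinematic estimate while remaining subject to physics.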