This paper introduces a novel framework for motion-controllable egocentric video generation that uses sparse 3D hand joints as control signals to address limitations of existing methods in handling 3D consistency and occlusions. The core innovation lies in an occlusion-aware control module that extracts features from the reference frame while penalizing unreliable signals from hidden joints and injecting 3D geometric embeddings into the latent space. Experiments demonstrate the framework's superior performance in generating high-fidelity videos with realistic interactions and cross-embodiment generalization compared to state-of-the-art baselines.
Generate realistic egocentric videos with consistent 3D hand articulation, even with severe occlusions, by using sparse 3D hand joints as control signals.
Motion-controllable video generation is crucial for egocentric applications in virtual reality and embodied AI. However, existing methods often struggle to achieve 3D-consistent, fine-grained hand articulation. By relying on 2D trajectories or implicit poses, they collapse 3D geometry into spatially ambiguous signals or over-rely on human-centric priors. Under severe egocentric occlusions, this causes motion inconsistencies and hallucinated artifacts, and prevents cross-embodiment generalization to robotic hands. To address these limitations, we propose a novel framework that generates egocentric videos from a single reference frame, leveraging sparse 3D hand joints as embodiment-agnostic control signals with clear semantic and geometric structure. We introduce an efficient control module that resolves occlusion ambiguities while fully preserving 3D information. Specifically, it extracts occlusion-aware features from the source reference frame by penalizing unreliable visual signals from hidden joints, and employs a 3D-based weighting mechanism to robustly handle dynamically occluded target joints during motion propagation. Concurrently, the module directly injects 3D geometric embeddings into the latent space to strictly enforce structural consistency. To facilitate robust training and evaluation, we develop an automated annotation pipeline that yields over one million high-quality egocentric video clips paired with precise hand trajectories. Additionally, we register humanoid kinematic and camera data to construct a cross-embodiment benchmark. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art baselines, generating high-fidelity egocentric videos with realistic interactions and exhibiting exceptional cross-embodiment generalization to robotic hands.
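
The abstract does not give implementation details, but the three ideas it names (penalizing reference-frame features at hidden joints, a 3D-distance-based weighting for dynamically occluded target joints, and injecting 3D geometric embeddings into the latent space) can be sketched concretely. The following is a minimal, hypothetical PyTorch illustration; all module names, tensor shapes, and the specific inverse-distance weighting scheme are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an occlusion-aware control module for sparse
# 3D hand-joint conditioning. Shapes and weighting choices are assumptions.
import torch
import torch.nn as nn


class OcclusionAwareControl(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.joint_proj = nn.Linear(3, feat_dim)       # embed 3D joint coordinates
        self.ref_proj = nn.Linear(feat_dim, feat_dim)  # project reference-frame features
        self.out_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, joints_3d, visibility, ref_feats, latent):
        """
        joints_3d : (B, J, 3)  target-frame 3D hand joints (control signal)
        visibility: (B, J)     1.0 for visible joints, 0.0 for occluded ones
        ref_feats : (B, J, C)  per-joint features sampled from the reference frame
        latent    : (B, J, C)  latent tokens the control signal is injected into
        """
        # Penalize unreliable visual signals from hidden joints: occluded joints
        # contribute no reference-frame appearance features.
        gated_ref = self.ref_proj(ref_feats) * visibility.unsqueeze(-1)

        # 3D-based weighting for dynamically occluded target joints: borrow
        # information from nearby visible joints, weighted by 3D proximity.
        dist = torch.cdist(joints_3d, joints_3d)                 # (B, J, J)
        weights = torch.softmax(-dist, dim=-1) * visibility.unsqueeze(1)
        weights = weights / weights.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        propagated = torch.bmm(weights, gated_ref)               # (B, J, C)

        # Inject 3D geometric embeddings directly into the latent space so that
        # structural information is preserved even when appearance features
        # are suppressed by occlusion.
        geom = self.joint_proj(joints_3d)
        return latent + self.out_proj(propagated + geom)


if __name__ == "__main__":
    B, J, C = 2, 21, 256
    module = OcclusionAwareControl(feat_dim=C)
    out = module(
        torch.randn(B, J, 3),
        (torch.rand(B, J) > 0.3).float(),  # roughly 30% of joints occluded
        torch.randn(B, J, C),
        torch.randn(B, J, C),
    )
    print(out.shape)  # torch.Size([2, 21, 256])
```

In this sketch the visibility mask both gates appearance features and restricts the propagation weights to visible source joints, so an occluded target joint still receives geometrically plausible context; how the paper actually parameterizes this is not specified in the abstract.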