The paper introduces In-2-4D, a new task: generating 4D (3D + motion) interpolations between two single-view images, which offers more precise motion control than text- or single-image-conditioned 4D generation. To handle large frame-to-frame motion gaps, the method takes a hierarchical approach: a video interpolation model identifies keyframes and generates smooth fragments between them, and each fragment is represented with dynamic 3D Gaussian Splatting (3DGS) guided by the interpolated video frames. Temporal consistency is improved by extending self-attention across timesteps and applying rigid transformation regularization; the independently generated 3D motion segments are then merged by interpolating boundary deformation fields and optimizing the result.
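The core representation can be pictured as canonical 3DGS centers displaced by a time-conditioned deformation field. The following is a minimal illustrative sketch, not the paper's implementation: the real field is learned (typically an MLP) from the guiding video frames, whereas here a fixed analytic rotation stands in for it.

```python
import numpy as np

def deformation_field(centers: np.ndarray, t: float) -> np.ndarray:
    """Offset for each Gaussian center at normalized time t in [0, 1].

    Toy stand-in for a learned field: rotate centers about the z-axis by
    an angle that grows with t, and return the resulting displacement.
    """
    angle = t * np.pi / 4
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return centers @ rot.T - centers

def deform(centers: np.ndarray, t: float) -> np.ndarray:
    """Map canonical Gaussian centers to their positions at time t."""
    return centers + deformation_field(centers, t)

canonical = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.5]])
# At t = 0 the field is the identity: no displacement.
assert np.allclose(deform(canonical, 0.0), canonical)
frame = deform(canonical, 1.0)  # centers at the fragment's end time
```

In the actual pipeline the per-Gaussian parameters (positions, and typically rotations and scales) are driven by such a field, which is what turns a static per-keyframe 3DGS into a dynamic one.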
Forget generating 4D from text or a single image – this work lets you create compelling 3D animations by simply specifying the start and end poses in two images.
We pose a new problem, In-2-4D, for generative 4D (i.e., 3D + motion) inbetweening to interpolate between two single-view images. In contrast to video/4D generation from only text or a single image, our interpolative task can leverage more precise motion control to better constrain the generation. Given two monocular RGB images representing the start and end states of an object in motion, our goal is to generate and reconstruct the motion in 4D, without making assumptions on the object category, motion type, length, or complexity. To handle such arbitrary and diverse motions, we utilize a foundational video interpolation model for motion prediction. However, large frame-to-frame motion gaps can lead to ambiguous interpretations. To address this, we employ a hierarchical approach to identify keyframes that are visually close to the input states while exhibiting significant motion, then generate smooth fragments between them. For each fragment, we construct a 3D representation of the keyframe using Gaussian Splatting (3DGS). The temporal frames within the fragment guide the motion, enabling their transformation into dynamic 3DGS through a deformation field. To improve temporal consistency and refine the 3D motion, we expand the self-attention of multi-view diffusion across timesteps and apply rigid transformation regularization. Finally, we merge the independently generated 3D motion segments by interpolating boundary deformation fields and optimizing them to align with the guiding video, ensuring smooth and flicker-free transitions. Through extensive qualitative and quantitative experiments as well as a user study, we demonstrate the effectiveness of our method and design choices. Project Page & Source Code: https://in-2-4d.github.io/
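The merging step can be sketched as a cross-fade between the boundary deformation fields of adjacent segments. The snippet below is a simplified illustration under assumed toy fields (`def_a`, `def_b` are hypothetical stand-ins, not the paper's learned fields), and it omits the subsequent optimization against the guiding video that the method performs:

```python
import numpy as np

def def_a(centers: np.ndarray, t: float) -> np.ndarray:
    return centers + np.array([t, 0.0, 0.0])     # segment A: slide along +x

def def_b(centers: np.ndarray, t: float) -> np.ndarray:
    return centers + np.array([1.0, t, 0.0])     # segment B: slide along +y

def blended(centers: np.ndarray, t: float,
            t0: float = 0.8, t1: float = 1.2) -> np.ndarray:
    """Linearly cross-fade from def_a to def_b inside the overlap [t0, t1]."""
    w = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)  # 0 -> pure A, 1 -> pure B
    # Segment B's field uses its own local time axis (offset by 1.0 here).
    return (1.0 - w) * def_a(centers, t) + w * def_b(centers, t - 1.0)

centers = np.zeros((1, 3))
start = blended(centers, 0.8)   # pure segment A at the overlap start
end = blended(centers, 1.2)     # pure segment B at the overlap end
```

Because the blend weight varies continuously across the overlap, the merged trajectory has no jump at the segment boundary, which is the property the paper's boundary interpolation (plus video-aligned optimization) is after.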