This paper introduces a bidirectional training framework for video frame interpolation that enforces temporal cycle consistency by jointly optimizing forward synthesis and backward reconstruction using learnable directional tokens. By ensuring reversibility of generated motion paths, the method reduces motion drift, directional ambiguity, and boundary misalignment, particularly in long-range sequences. Experiments demonstrate state-of-the-art performance in image quality, motion smoothness, and dynamic control without increasing inference cost.
Achieve state-of-the-art video frame interpolation by making your diffusion model predict the past, not just the future.
Video frame interpolation aims to synthesize realistic intermediate frames between given endpoints while adhering to specific motion semantics. While recent generative models have improved visual fidelity, they predominantly operate in a unidirectional manner, lacking mechanisms to self-verify temporal consistency. This often leads to motion drift, directional ambiguity, and boundary misalignment, especially in long-range sequences. Inspired by the principle of temporal cycle-consistency in self-supervised learning, we propose a novel bidirectional framework that enforces symmetry between forward and backward generation trajectories. Our approach introduces learnable directional tokens to explicitly condition a shared backbone on temporal orientation, enabling the model to jointly optimize forward synthesis and backward reconstruction within a single unified architecture. This cycle-consistent supervision acts as a powerful regularizer, ensuring that generated motion paths are logically reversible. Furthermore, we employ a curriculum learning strategy that progressively trains the model from short to long sequences, stabilizing dynamics across varying durations. Crucially, our cyclic constraints are applied only during training; inference requires a single forward pass, maintaining the high efficiency of the base model. Extensive experiments show that our method achieves state-of-the-art performance in imaging quality, motion smoothness, and dynamic control on both 37-frame and 73-frame tasks, outperforming strong baselines while incurring no additional computational overhead.
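To make the training recipe concrete, here is a minimal PyTorch-style sketch of the bidirectional objective and curriculum schedule described above. It is an illustration under assumed interfaces, not the authors' released code: the `backbone` module, the frame tensor layout (batch, time, channels, height, width), the L1 losses, the cycle weight, and the curriculum constants are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalInterpolator(nn.Module):
    """Sketch: a shared backbone conditioned on a learnable directional token."""

    FORWARD, BACKWARD = 0, 1

    def __init__(self, backbone: nn.Module, token_dim: int):
        super().__init__()
        self.backbone = backbone  # shared generator; interface assumed here
        # Two learnable directional tokens, one per temporal orientation.
        self.direction_tokens = nn.Embedding(2, token_dim)

    def forward(self, start_frame, end_frame, direction: int):
        # Condition the shared backbone on temporal orientation via the token.
        idx = torch.tensor([direction], device=start_frame.device)
        token = self.direction_tokens(idx)  # shape: (1, token_dim)
        return self.backbone(start_frame, end_frame, token)


def training_step(model, start_frame, end_frame, target_frames, cycle_weight=0.1):
    """Joint forward/backward optimization with a cycle-consistency regularizer.

    target_frames: ground-truth intermediate frames, shape (B, T, C, H, W).
    The 0.1 cycle weight is an assumed hyperparameter, not from the paper.
    """
    # Forward synthesis: interpolate from the start frame toward the end frame.
    fwd = model(start_frame, end_frame, direction=model.FORWARD)

    # Backward reconstruction: swap the endpoints and reverse the targets in time.
    bwd = model(end_frame, start_frame, direction=model.BACKWARD)

    loss_fwd = F.l1_loss(fwd, target_frames)
    loss_bwd = F.l1_loss(bwd, target_frames.flip(dims=[1]))

    # Cycle consistency: the backward trajectory, reversed in time, should
    # coincide with the forward trajectory (i.e., motion paths are reversible).
    loss_cycle = F.l1_loss(fwd, bwd.flip(dims=[1]))

    return loss_fwd + loss_bwd + cycle_weight * loss_cycle


def curriculum_length(step, short_len=9, long_len=73, warmup_steps=10_000):
    """Assumed curriculum schedule: grow the clip length from short to long."""
    frac = min(step / warmup_steps, 1.0)
    return int(round(short_len + frac * (long_len - short_len)))
```

In this sketch the backward and cycle terms exist only in the training loss; at inference the model is called once with the forward token, which is how the abstract's claim of no additional inference cost would be realized.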