DynaVid addresses the challenge of synthesizing realistic, highly dynamic videos by training video diffusion models on synthetic motion data represented as optical flow. By decoupling motion generation from appearance, DynaVid leverages the diversity and control offered by synthetic data without introducing artificial visual artifacts. Experiments on human motion and camera control demonstrate improved realism and controllability compared to existing methods.
Synthetic motion data, when represented as optical flow, unlocks a new level of realism and control in video diffusion models, surpassing the limitations of real-world datasets.
Despite recent progress, video diffusion models still struggle to synthesize realistic videos involving highly dynamic motions or requiring fine-grained motion controllability. A central limitation lies in the scarcity of such examples in commonly used training datasets. To address this, we introduce DynaVid, a video synthesis framework that trains on synthetic motion data represented as optical flow and rendered using computer graphics pipelines. This approach offers two key advantages. First, synthetic motion provides diverse motion patterns and precise control signals that are difficult to obtain from real data. Second, unlike rendered videos with artificial appearances, rendered optical flow encodes only motion and is decoupled from appearance, preventing models from reproducing the unnatural look of synthetic videos. Building on this idea, DynaVid adopts a two-stage generation framework: a motion generator first synthesizes motion, and a motion-guided video generator then produces video frames conditioned on that motion. This decoupled formulation enables the model to learn dynamic motion patterns from synthetic data while preserving the visual realism of real-world videos. We validate our framework on two challenging scenarios where existing datasets are particularly limited: vigorous human motion generation and extreme camera motion control. Extensive experiments demonstrate that DynaVid improves realism and controllability in both dynamic motion generation and camera motion control.
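The two-stage, decoupled pipeline described above can be sketched in minimal form. This is an illustrative sketch only: the class names, method signatures, and stub outputs below are assumptions for exposition, not the paper's actual interface, and the real components would be diffusion models rather than the toy stand-ins used here.

```python
# Illustrative sketch of a two-stage, motion-decoupled generation pipeline.
# All names (Motion, MotionGenerator, VideoGenerator, dynavid_pipeline) are
# hypothetical; the paper specifies only the decoupling, not this API.
import random
from dataclasses import dataclass


@dataclass
class Motion:
    # Optical flow, simplified here to one (dx, dy) displacement per frame.
    flow: list


class MotionGenerator:
    """Stage 1: synthesize motion (optical flow) from a prompt.

    In DynaVid this stage is trained on synthetic, graphics-rendered flow,
    so appearance artifacts of rendered videos never enter the model.
    """

    def generate(self, prompt: str, num_frames: int) -> Motion:
        # Toy stand-in: deterministic pseudo-random flow per prompt.
        rng = random.Random(hash(prompt) % (2**32))
        return Motion(
            flow=[(rng.uniform(-1, 1), rng.uniform(-1, 1))
                  for _ in range(num_frames)]
        )


class VideoGenerator:
    """Stage 2: render frames conditioned on the generated motion.

    In DynaVid this stage is trained on real videos, so visual realism
    comes from real data while dynamics come from stage 1.
    """

    def generate(self, prompt: str, motion: Motion) -> list:
        # Toy stand-in: one placeholder "frame" per flow field.
        return [f"frame for '{prompt}' with flow ({dx:+.2f}, {dy:+.2f})"
                for dx, dy in motion.flow]


def dynavid_pipeline(prompt: str, num_frames: int = 4) -> list:
    """Run motion synthesis, then motion-conditioned frame synthesis."""
    motion = MotionGenerator().generate(prompt, num_frames)
    return VideoGenerator().generate(prompt, motion)


frames = dynavid_pipeline("a dancer performing a backflip")
print(len(frames))  # 4
```

The key design point the sketch mirrors is that stage 2 never sees synthetic imagery, only motion: the `Motion` object carries flow alone, so the appearance distribution is supplied entirely by the (real-video-trained) second stage.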