The paper introduces Light4D, a training-free framework for 4D video relighting that addresses the scarcity of paired training data and the difficulty of maintaining temporal consistency under extreme viewpoint changes. Light4D employs Disentangled Flow Guidance to inject lighting control into the latent space while preserving geometry, and combines Temporal Consistent Attention with deterministic regularization to reduce appearance flickering. Experiments show that Light4D synthesizes 4D videos with competitive temporal consistency and lighting fidelity, even under camera rotations from −90° to 90°.
Recent advances in diffusion-based generative models have established a new paradigm for image and video relighting. However, extending these capabilities to 4D relighting remains challenging, primarily due to the scarcity of paired 4D relighting training data and the difficulty of maintaining temporal consistency across extreme viewpoints. In this work, we propose Light4D, a novel training-free framework designed to synthesize consistent 4D videos under target illumination, even under extreme viewpoint changes. First, we introduce Disentangled Flow Guidance, a time-aware strategy that effectively injects lighting control into the latent space while preserving geometric integrity. Second, to reinforce temporal consistency, we develop Temporal Consistent Attention within the IC-Light architecture and further incorporate deterministic regularization to eliminate appearance flickering. Extensive experiments demonstrate that our method achieves competitive performance in temporal consistency and lighting fidelity, robustly handling camera rotations from −90° to 90°. Code: https://github.com/AIGeeksGroup/Light4D. Website: https://aigeeksgroup.github.io/Light4D.
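The implementation details are not given on this page, but the cross-frame attention idea behind Temporal Consistent Attention can be illustrated. Below is a minimal, hypothetical PyTorch sketch: each frame attends to its own keys and values concatenated with those of a shared anchor frame, which ties appearance to a common reference and suppresses frame-to-frame drift. The function name, tensor layout, and anchor scheme are illustrative assumptions, not the paper's actual implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

def temporal_consistent_attention(q, k, v, anchor_idx=0):
    """Hypothetical sketch of cross-frame attention for temporal consistency.

    q, k, v: per-frame projections of shape (frames, heads, tokens, dim).
    Each frame attends over its own keys/values plus those of one anchor
    frame, so all frames share a common appearance reference.
    """
    # Broadcast the anchor frame's keys/values to every frame.
    k_anchor = k[anchor_idx : anchor_idx + 1].expand_as(k)
    v_anchor = v[anchor_idx : anchor_idx + 1].expand_as(v)

    # Extend each frame's attention context with the anchor tokens.
    k_ext = torch.cat([k, k_anchor], dim=2)  # (frames, heads, 2*tokens, dim)
    v_ext = torch.cat([v, v_anchor], dim=2)

    # Standard scaled dot-product attention over the extended context.
    return F.scaled_dot_product_attention(q, k_ext, v_ext)


if __name__ == "__main__":
    frames, heads, tokens, dim = 8, 4, 64, 32
    q = torch.randn(frames, heads, tokens, dim)
    k = torch.randn(frames, heads, tokens, dim)
    v = torch.randn(frames, heads, tokens, dim)
    out = temporal_consistent_attention(q, k, v)
    print(out.shape)  # torch.Size([8, 4, 64, 32])
```

In the same spirit, the deterministic regularization mentioned in the abstract plausibly corresponds to deterministic (e.g., DDIM-style) sampling with noise shared across frames, so that stochastic variation cannot reintroduce flicker; the abstract does not specify the exact mechanism.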