This paper introduces KD-CVG, a knowledge-driven approach to creative video generation (CVG) for advertising, addressing the challenges of ambiguous semantic alignment and inadequate motion adaptability in existing Text-to-Video (T2V) models. KD-CVG leverages a novel Advertising Creative Knowledge Base (ACKB) and incorporates two modules: Semantic-Aware Retrieval (SAR) using graph attention networks and reinforcement learning, and Multimodal Knowledge Reference (MKR) that injects semantic and motion priors into the T2V model. Experiments demonstrate that KD-CVG outperforms state-of-the-art methods by achieving better semantic alignment and more realistic motion in generated videos.
Forget boring ads: this new method uses creative knowledge to generate videos that actually match product features and move realistically.
Creative Generation (CG) leverages generative models to automatically produce advertising content that highlights product features, and it has been a significant focus of recent research. However, while CG has advanced considerably, most efforts have concentrated on generating advertising text and images, leaving Creative Video Generation (CVG) relatively underexplored. This gap is largely due to two major challenges faced by Text-to-Video (T2V) models: (a) ambiguous semantic alignment, where models struggle to accurately correlate product selling points with creative video content, and (b) inadequate motion adaptability, resulting in unrealistic movements and distortions. To address these challenges, we develop a comprehensive Advertising Creative Knowledge Base (ACKB) as a foundational resource and propose a knowledge-driven approach (KD-CVG) to overcome the knowledge limitations of existing models. KD-CVG consists of two primary modules: Semantic-Aware Retrieval (SAR) and Multimodal Knowledge Reference (MKR). SAR combines the semantic awareness of graph attention networks with reinforcement learning feedback to enhance the model's comprehension of the connections between selling points and creative videos. Building on this, MKR injects semantic and motion priors into the T2V model to fill the remaining knowledge gaps. Extensive experiments demonstrate KD-CVG's superior semantic alignment and motion adaptability, validating its effectiveness over other state-of-the-art methods. The code and dataset will be open-sourced at https://kdcvg.github.io/KDCVG/.
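The abstract does not spell out SAR's internals, but it does say the module builds on graph attention networks to relate selling points to creative videos. As a minimal, hypothetical sketch (all names and shapes are illustrative assumptions, not the paper's implementation), a single-head GAT-style layer over a graph of selling-point and video nodes looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(h, adj, W, a):
    """Single-head graph attention (GAT-style) aggregation.

    Hypothetical sketch of the kind of layer SAR could use; not the
    paper's actual code.

    h:   (N, F)  node features (e.g. selling-point / video embeddings)
    adj: (N, N)  binary adjacency matrix (1 = edge, incl. self-loops)
    W:   (F, F2) shared linear transform
    a:   (2*F2,) attention parameter vector
    returns: (N, F2) attention-aggregated node features
    """
    z = h @ W                                    # transform: (N, F2)
    n = z.shape[0]
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Attention logit e_ij = LeakyReLU(a^T [z_i || z_j])
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s
    e = np.where(adj > 0, e, -1e9)               # mask non-neighbors
    alpha = softmax(e, axis=1)                   # attention weights per node
    return alpha @ z                             # weighted neighbor aggregation
```

Each node then summarizes its neighbors weighted by learned relevance, which is the property that would let a retrieval module score how strongly a selling point connects to candidate videos; the reinforcement-learning feedback the paper mentions would sit on top of such scores.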