This paper addresses the problem of coarse reward credit assignment in reinforcement learning for visual generation, where multiple reward models capture heterogeneous objectives. The authors propose Objective-aware Trajectory Credit Assignment (OTCA), a framework that decomposes reward at the trajectory level to estimate the importance of denoising steps and adaptively weights multiple reward signals. Experiments demonstrate that OTCA improves image and video generation quality by converting coarse reward supervision into a structured, timestep-aware training signal.
Stop blasting your diffusion models with a single, static reward signal: fine-grained credit assignment across denoising steps and objectives unlocks better image and video generation.
Reinforcement learning, particularly Group Relative Policy Optimization (GRPO), has emerged as an effective framework for post-training visual generative models with human preference signals. However, its effectiveness is fundamentally limited by coarse reward credit assignment. In modern visual generation, multiple reward models are often used to capture heterogeneous objectives, such as visual quality, motion consistency, and text alignment. Existing GRPO pipelines typically collapse these rewards into a single static scalar and propagate it uniformly across the entire diffusion trajectory. This design ignores the stage-specific roles of different denoising steps and produces mistimed or incompatible optimization signals. To address this issue, we propose Objective-aware Trajectory Credit Assignment (OTCA), a structured framework for fine-grained GRPO training. OTCA consists of two key components. Trajectory-Level Credit Decomposition estimates the relative importance of different denoising steps. Multi-Objective Credit Allocation adaptively weights and combines multiple reward signals throughout the denoising process. By jointly modeling temporal credit and objective-level credit, OTCA converts coarse reward supervision into a structured, timestep-aware training signal that better matches the iterative nature of diffusion-based generation. Extensive experiments show that OTCA consistently improves both image and video generation quality across evaluation metrics.
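The two components described above can be sketched in code. This is a minimal illustration, not the paper's actual method: the function name `otca_advantages` and the specific forms of the weights (a per-step importance vector and a per-step objective-mixing matrix) are assumptions, since the abstract does not give the formulas. The sketch shows the general shape of the idea: start from GRPO-style group-normalized advantages per objective, then redistribute that credit across denoising steps and objectives instead of broadcasting one static scalar uniformly.

```python
import numpy as np

def otca_advantages(rewards, step_weights, obj_weights):
    """Hypothetical sketch of OTCA-style credit assignment.

    rewards:      (N, K) rewards for N sampled trajectories under K reward models
                  (e.g. visual quality, motion consistency, text alignment).
    step_weights: (T,)   relative importance of each of the T denoising steps
                  (Trajectory-Level Credit Decomposition, here taken as given).
    obj_weights:  (T, K) per-step mixing weights over the K objectives
                  (Multi-Objective Credit Allocation, here taken as given).
    Returns (N, T) timestep-aware advantages.
    """
    # GRPO-style group-normalized advantage, computed per objective.
    adv = (rewards - rewards.mean(axis=0)) / (rewards.std(axis=0) + 1e-8)  # (N, K)
    # Blend objectives differently at each denoising step.
    per_step = adv @ obj_weights.T  # (N, T)
    # Scale each step's credit by its estimated trajectory-level importance.
    return per_step * step_weights  # (N, T)
```

With uniform `step_weights` and `obj_weights`, this reduces to the standard GRPO baseline: every step receives the same averaged scalar advantage. The gain OTCA claims comes precisely from making those two weight structures non-uniform and adaptive.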