This paper introduces a generative texture filtering method that leverages the image prior of pre-trained generative models via a novel two-stage fine-tuning strategy. The approach first uses supervised fine-tuning on paired images, followed by reinforcement fine-tuning on unlabeled data guided by a reward function that balances texture removal and structure preservation. Experiments demonstrate that this method significantly outperforms existing techniques, particularly in challenging scenarios.
Generative models can be surprisingly effective for texture filtering when fine-tuned with a two-stage supervised and reinforcement learning approach.
We present a generative method for texture filtering that exhibits surprisingly good performance and generalizability. Our core idea is to empower texture filtering by taking full advantage of the strong learned image prior of pre-trained generative models. To this end, we propose to fine-tune a pre-trained generative model via a two-stage strategy: we first conduct supervised fine-tuning on a very small set of paired images, and then perform reinforcement fine-tuning on a large-scale unlabeled dataset under the guidance of a reward function that quantifies the quality of texture removal and structure preservation. Extensive experiments show that our method clearly outperforms previous methods and effectively handles previously challenging cases. Our code is available at https://github.com/OnlyZZZZ/Generative_Texture_Filtering.
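To make the reward idea concrete, here is a minimal, hypothetical sketch of a reward that trades off texture removal against structure preservation. The abstract does not specify the actual reward; this toy version uses image gradients as a proxy (strong input gradients are treated as structure, everything else as texture), and all names and thresholds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def texture_removal_reward(inp, out, edge_thresh=0.2):
    """Toy reward: high when `out` is smooth off-structure (texture removed)
    and keeps the strong gradients of `inp` (structure preserved).
    `inp`/`out` are 2-D float arrays in [0, 1]; `edge_thresh` is an
    illustrative cutoff separating structural edges from texture."""
    def grad_mag(img):
        # Forward differences; last row/column padded so shape is preserved.
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return np.hypot(gx, gy)

    g_in, g_out = grad_mag(inp), grad_mag(out)
    struct = g_in > edge_thresh  # strong input gradients = structure

    # Texture-removal term: low residual gradient away from structure.
    smooth = 1.0 - g_out[~struct].mean() if (~struct).any() else 1.0
    # Structure-preservation term: output retains edge gradient energy.
    preserve = (np.minimum(g_out, g_in)[struct].sum()
                / (g_in[struct].sum() + 1e-8)) if struct.any() else 1.0
    return 0.5 * smooth + 0.5 * preserve
```

For a clean step edge passed through unchanged, both terms are 1 and the reward is 1.0; for a textured input, a smoothed output that keeps the step scores higher than returning the input untouched, which is the behavior a reinforcement fine-tuning loop would push the model toward.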