This paper introduces ResQu, a super-resolution framework that combines quaternion wavelet preprocessing with latent diffusion models to improve image reconstruction quality. ResQu enhances the conditioning process by dynamically integrating quaternion wavelet embeddings at different denoising stages and leveraging generative priors from Stable Diffusion. Experiments on domain-specific datasets show that ResQu outperforms existing methods in perceptual quality and standard evaluation metrics, particularly at high upscaling factors.
By dynamically integrating quaternion wavelet embeddings into latent diffusion models, ResQu achieves state-of-the-art super-resolution results, outperforming existing methods in both perceptual quality and standard metrics.
Image Super-Resolution (SR) is a fundamental problem in computer vision with broad applications spanning from medical imaging to satellite analysis. The ability to reconstruct high-resolution images from low-resolution inputs is crucial for enhancing downstream tasks such as object detection and segmentation. While deep learning has significantly advanced SR, achieving high-quality reconstructions with fine-grained details and realistic textures remains challenging, particularly at high upscaling factors. Recent approaches leveraging diffusion models have demonstrated promising results, yet they often struggle to balance perceptual quality with structural fidelity. In this work, we introduce ResQu, a novel SR framework that integrates quaternion wavelet preprocessing with latent diffusion models, incorporating a new quaternion wavelet- and time-aware encoder. Unlike prior methods that simply apply wavelet transforms within diffusion models, our approach enhances the conditioning process by exploiting quaternion wavelet embeddings, which are dynamically integrated at different stages of denoising. Furthermore, we also leverage the generative priors of foundation models such as Stable Diffusion. Extensive experiments on domain-specific datasets demonstrate that our method achieves outstanding SR results, outperforming existing approaches in many cases on both perceptual quality and standard evaluation metrics. The code will be available after the revision process.
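As background for the preprocessing step described above, a quaternion wavelet transform is commonly built from the 2D analytic signal: the image plus its two partial Hilbert transforms and its total Hilbert transform, each decomposed with a standard 2D discrete wavelet transform. The sketch below illustrates this construction using SciPy and PyWavelets; the function name, the use of the Haar wavelet, and the stacked output layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def quaternion_wavelet_embedding(img, wavelet="haar"):
    """Toy quaternion wavelet decomposition of a 2D image (assumed layout)."""
    # Four quaternion components from the 2D analytic signal:
    # the image itself, its partial Hilbert transforms along each
    # axis, and the total (double) Hilbert transform.
    hx = np.imag(hilbert(img, axis=1))   # partial Hilbert along rows
    hy = np.imag(hilbert(img, axis=0))   # partial Hilbert along columns
    hxy = np.imag(hilbert(hx, axis=0))   # total Hilbert transform
    comps = []
    for c in (img, hx, hy, hxy):
        # Single-level 2D DWT: approximation + horizontal/vertical/diagonal
        cA, (cH, cV, cD) = pywt.dwt2(c, wavelet)
        comps.append(np.stack([cA, cH, cV, cD]))
    # Shape: (4 quaternion components, 4 subbands, H/2, W/2)
    return np.stack(comps)

emb = quaternion_wavelet_embedding(np.random.rand(64, 64))
print(emb.shape)  # (4, 4, 32, 32)
```

In a conditioning pipeline like the one the abstract describes, such a multi-subband tensor would typically be fed to a learned encoder rather than used directly.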