This paper introduces a novel nonlocal variational method for color image restoration that leverages saturation-value (SV) similarity between image patches. By incorporating SV similarity into nonlocal total variation, the method captures color information better than traditional methods that operate on individual RGB channels. Experiments demonstrate that the proposed method, solved using a Bregmanized operator splitting algorithm, achieves superior performance in visual quality and in quantitative metrics such as PSNR and SSIM compared to existing techniques.
Color image restoration gets a boost: exploiting saturation-value similarity in nonlocal methods yields significantly better results than relying on individual RGB channels.
In this paper, we propose and develop a novel nonlocal variational technique based on saturation-value similarity for color image restoration. In traditional nonlocal methods, image patches are extracted directly from the red, green, and blue channels of a color image, and the color information cannot be described finely because patch similarity is based mainly on the grayscale values of each channel independently. The main aim of this paper is to propose and develop a novel nonlocal regularization method that considers the similarity of image patches in the saturation-value channels of a color image. In particular, we first establish saturation-value similarity based nonlocal total variation by incorporating the saturation-value similarity of color image patches into the proposed nonlocal gradients, which describe the saturation and value similarity of two adjacent color image patches. The proposed nonlocal variational models are then formulated based on this saturation-value similarity based nonlocal total variation. Moreover, we design an effective and efficient algorithm to solve the proposed optimization problems numerically by employing the Bregmanized operator splitting method, and we also study the convergence of the proposed algorithms. Numerical examples demonstrate that the performance of the proposed models is better than that of the other tested methods in terms of visual quality and quantitative metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), quaternion structural similarity index (QSSIM), and S-CIELAB color error.
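The core idea above, measuring patch similarity in the saturation and value channels rather than on raw RGB values, can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a standard nonlocal-means-style Gaussian weight `exp(-d²/h²)` applied to an SV patch distance, and the function names (`sv_channels`, `sv_patch_weight`) and the filtering parameter `h` are hypothetical choices for illustration.

```python
import numpy as np

def sv_channels(rgb):
    """Compute saturation (S) and value (V) channels from an (H, W, 3)
    RGB array with entries in [0, 1], using the HSV definitions
    V = max(R, G, B) and S = (V - min(R, G, B)) / V."""
    v = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    s = np.where(v > 0, (v - mn) / np.maximum(v, 1e-12), 0.0)
    return s, v

def sv_patch_weight(rgb, p, q, radius=2, h=0.1):
    """Illustrative nonlocal weight between the patches centered at
    pixels p and q, based on their saturation-value distance.
    A weight of 1 means the SV patches are identical; the weight
    decays toward 0 as the SV patches differ more."""
    s, v = sv_channels(rgb)

    def patch(channel, center):
        y, x = center
        return channel[y - radius:y + radius + 1, x - radius:x + radius + 1]

    # Mean squared SV distance between the two patches.
    ds = patch(s, p) - patch(s, q)
    dv = patch(v, p) - patch(v, q)
    d2 = (ds ** 2 + dv ** 2).mean()
    return np.exp(-d2 / h ** 2)
```

In a nonlocal total variation scheme, such weights would multiply the pairwise pixel differences in the nonlocal gradient, so that patches with similar saturation and value contribute more strongly to each other's restoration.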