This paper introduces SIComp, a setup-independent framework for projector compensation that addresses geometric and photometric distortions without requiring fine-tuning or retraining for new projector-camera configurations. The authors overcome limitations of prior work by constructing a large-scale real-world dataset of 277 distinct setups and employing a co-adaptive design that decouples geometric correction (via optical flow) from photometric compensation (via a novel network with intensity-varying surface priors). Experiments show SIComp significantly outperforms existing methods in generalization across diverse unseen setups, providing the first generalizable solution to projector compensation.
Project images onto any surface, under any lighting, from any angle, without recalibrating your projector – SIComp just works.
Projector compensation seeks to correct the geometric and photometric distortions that occur when images are projected onto nonplanar or textured surfaces. However, most existing methods are highly setup-dependent, requiring fine-tuning or retraining whenever the surface, lighting, or projector-camera pose changes. Progress has been limited by two key challenges: (1) the absence of large, diverse training datasets, and (2) the setup-specificity of existing geometric correction models, which are constrained to a particular spatial configuration and, without further retraining or fine-tuning, often fail to generalize to novel geometric configurations. We introduce SIComp, the first Setup-Independent framework for full projector Compensation, capable of generalizing to unseen setups without fine-tuning or retraining. To enable this, we construct a large-scale real-world dataset spanning 277 distinct projector-camera setups. SIComp adopts a co-adaptive design that decouples geometry and photometry: a carefully tailored optical flow module performs online geometric correction, while a novel photometric network handles photometric compensation. To further enhance robustness under varying illumination, we integrate intensity-varying surface priors into the network design. Extensive experiments demonstrate that SIComp consistently produces high-quality compensation across diverse unseen setups, substantially outperforming existing methods in generalization ability and establishing the first generalizable solution to projector compensation. The code and dataset are available on our project page: https://hai-bo-li.github.io/SIComp/
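To make the decoupled design concrete, the sketch below illustrates the two stages the abstract describes, geometric correction via a dense flow field followed by photometric compensation against a surface prior. All function names are illustrative assumptions; SIComp's actual modules are learned networks, not these closed-form stand-ins.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Geometric correction: resample the input with a dense flow field
    (nearest-neighbor sampling for brevity). flow[y, x] = (dy, dx)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def photometric_compensate(target, surface_prior, eps=1e-3):
    """Photometric compensation: scale the desired image by the inverse of
    a per-pixel surface response so the projected result matches the
    target. A real system replaces this with a learned network that also
    accounts for intensity-varying behavior."""
    return np.clip(target / np.maximum(surface_prior, eps), 0.0, 1.0)

# Toy example: identity flow and a uniformly dim surface patch.
target = np.full((4, 4), 0.5)          # desired viewer-side image
flow = np.zeros((4, 4, 2))             # no geometric displacement
surface = np.full((4, 4), 0.8)         # surface reflects 80% of intensity

aligned = warp_with_flow(target, flow)
compensated = photometric_compensate(aligned, surface)
recovered = compensated * surface      # what the camera would observe
```

Here `recovered` matches `target`, showing how pre-dividing by the surface response cancels the distortion; the two stages compose cleanly precisely because geometry and photometry are handled by separate modules.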