This paper introduces IR4Net, a novel optical side-channel attack that reconstructs screen content from non-contact irradiance measurements. The method tackles two obstacles, the instability of the projection mapping and the irreversible compression of light transport, by incorporating a physically regularized irradiance approximation and an irreversibility-constrained semantic reprojection module. Experiments across four scene categories demonstrate superior reconstruction fidelity and robustness to illumination changes compared with existing neural approaches.
Reconstructing screen content from a distance is now possible with higher fidelity and robustness, thanks to a new physically guided deep learning approach that overcomes key instabilities in optical projection inversion.
Non-contact exfiltration of electronic screen content poses a serious security threat, with optical side-channel attacks as the principal vector. We introduce an optical projection side-channel paradigm that confronts two core instabilities: (i) the near-singular Jacobian spectrum of the projection mapping violates Hadamard well-posedness, making inversion hypersensitive to perturbations; (ii) irreversible compression in light transport destroys global semantic cues, magnifying reconstruction ambiguity. Exploiting passive speckle patterns formed by diffuse reflection, our Irradiance Robust Radiometric Inversion Network (IR4Net) couples a Physically Regularized Irradiance Approximation (PRIrr-Approximation), which embeds the radiative transfer equation in a learnable optimizer, with a contour-to-detail cross-scale reconstruction mechanism that suppresses noise propagation. An Irreversibility Constrained Semantic Reprojection (ICSR) module then restores lost global structure through context-driven semantic mapping. Evaluated across four scene categories, IR4Net achieves higher reconstruction fidelity than competing neural approaches while remaining robust to illumination perturbations.
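The ill-posedness claim in (i) can be made concrete with a toy linear inverse problem. The sketch below is illustrative only: the operator `A` is a synthetic near-singular matrix standing in for the projection mapping's Jacobian, not the paper's light-transport model, and plain Tikhonov regularization stands in for the physically regularized approximation. It shows how tiny measurement noise is catastrophically amplified by naive inversion, and how a regularizer restores stability.

```python
import numpy as np

# Build a synthetic forward operator with a rapidly decaying singular
# spectrum (condition number ~1e8), mimicking a near-singular Jacobian.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)            # singular values from 1 down to 1e-8
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true + 1e-6 * rng.standard_normal(n)   # tiny measurement noise

# Naive inversion: the near-zero singular values amplify the noise.
x_naive = np.linalg.solve(A, y)

# Tikhonov-regularized inversion: solve (A^T A + lam I) x = A^T y.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(f"naive relative error: {err_naive:.2f}, regularized: {err_reg:.2f}")
```

The naive estimate has relative error far above 1 (the reconstruction is pure amplified noise), while the regularized estimate stays bounded, which is exactly why a physics-based prior is needed before any learned refinement can succeed.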
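"Embedding the radiative transfer equation in a learnable optimizer" suggests the deep-unrolling family of methods, where each network layer performs one iteration of a model-based solver. The abstract does not specify PRIrr-Approximation's internals, so the following is a generic unrolled-ISTA sketch under assumed simplifications: `A` stands in for a known forward operator, and the fixed step size and shrinkage threshold play the role of parameters that would be learned per iteration in a trained network.

```python
import numpy as np

def unrolled_recon(A, y, n_iters=10):
    """Unrolled ISTA sketch: alternate a forward-model consistency
    gradient step with a soft-threshold standing in for a learned prior.
    In a trained unrolled network, `step` and `lam` would be learnable
    per-iteration parameters; here they are hand-set constants."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic data term
    lam = 1e-3                               # shrinkage threshold (hand-set)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)                    # physics-consistency step
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # prior proximal step
    return x

# Hypothetical usage on a synthetic well-posed problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 30)) / np.sqrt(30)
x_true = rng.standard_normal(30)
y = A @ x_true
x_hat = unrolled_recon(A, y)
```

The design point of unrolling is that the optimizer's structure enforces consistency with the physical forward model at every layer, while the learned components only have to model what physics leaves unexplained.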