This paper investigates how well CLIP models understand 360-degree image-text pairs, focusing on "360-degree textual semantics" (explicit format identifiers) and "360-degree visual semantics" (invariance to horizontal circular shifts). They probe CLIP's understanding using keyword manipulation and horizontal circular shifts, finding that CLIP understands textual identifiers but struggles with visual semantics. They then propose a LoRA-based fine-tuning approach to improve invariance to circular shifts, revealing a trade-off between 360-degree understanding and original semantic evaluation performance.
CLIP models, despite their prowess, stumble when understanding 360° images, failing to maintain semantic alignment under horizontal circular shifts.
The dream of instantly creating rich 360-degree panoramic worlds from text is rapidly becoming a reality, yet a crucial gap remains in our ability to reliably evaluate the semantic alignment of these generated panoramas. Contrastive Language-Image Pre-training (CLIP) models, the standard AI evaluators for image-text alignment, are predominantly trained on perspective image-text pairs, leaving open the question of how well they understand the unique characteristics of 360-degree panoramic image-text pairs. This paper addresses this gap by first introducing two concepts: "360-degree textual semantics", the semantic information conveyed by explicit format identifiers, and "360-degree visual semantics", semantics that remain invariant under horizontal circular shifts. To probe CLIP's comprehension of these semantics, we then propose novel evaluation methodologies based on keyword manipulation and on horizontal circular shifts of varying magnitudes. Rigorous statistical analyses across popular CLIP configurations reveal that: (1) CLIP models effectively leverage explicit textual identifiers, demonstrating an understanding of 360-degree textual semantics; and (2) CLIP models fail to robustly preserve semantic alignment under horizontal circular shifts, indicating limited comprehension of 360-degree visual semantics. To address this limitation, we propose a LoRA-based fine-tuning framework that explicitly instills invariance to circular shifts. Our fine-tuned models exhibit improved comprehension of 360-degree visual semantics, though with a slight degradation in original semantic evaluation performance, highlighting a fundamental trade-off in adapting CLIP to 360-degree panoramic images. Code is available at https://github.com/littlewhitesea/360Semantics.
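To make the circular-shift probe concrete, here is a minimal sketch (not the authors' released code; see the repository above for that) using Hugging Face's transformers CLIP. The model name, caption, shift fractions, and input path are illustrative assumptions. The property being tested is that an equirectangular panorama rolled along its width depicts the same scene, so an evaluator that understands 360-degree visual semantics should assign it nearly the same score.

```python
# Minimal sketch of the circular-shift probe (illustrative, not the paper's code).
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL = "openai/clip-vit-base-patch32"  # one popular CLIP configuration
model = CLIPModel.from_pretrained(MODEL).eval()
processor = CLIPProcessor.from_pretrained(MODEL)

@torch.no_grad()
def clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP's image and text embeddings."""
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def circular_shift(image: Image.Image, fraction: float) -> Image.Image:
    """Roll an equirectangular image horizontally; the depicted scene is unchanged."""
    arr = np.asarray(image)
    return Image.fromarray(np.roll(arr, int(fraction * arr.shape[1]), axis=1))

# Pre-resize to a square so the processor's center crop keeps the full field of view.
pano = Image.open("panorama.jpg").convert("RGB").resize((224, 224))  # hypothetical input
caption = "A 360-degree panoramic view of a mountain lake"
base = clip_score(pano, caption)
for frac in (0.25, 0.5, 0.75):  # shift magnitudes are illustrative
    score = clip_score(circular_shift(pano, frac), caption)
    print(f"shift {frac:.2f}: {score:.4f} (delta vs. unshifted: {score - base:+.4f})")
```

The fine-tuning idea can be sketched in a similarly condensed form with the peft library; the target modules, rank, loss, and shift sampling below are assumptions for illustration, not the paper's exact recipe. The toy objective simply penalizes any change in the image embedding when a square-resized panorama tensor is rolled along its width.

```python
# Condensed, hypothetical sketch of LoRA fine-tuning for shift invariance.
import torch
from peft import LoraConfig, get_peft_model
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"])  # attention projections
model = get_peft_model(model, lora_cfg)  # only the LoRA adapters are trainable
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)

def invariance_loss(pixel_values: torch.Tensor) -> torch.Tensor:
    """Embed panoramas and randomly rolled copies; penalize embedding drift."""
    width = pixel_values.shape[-1]
    rolled = torch.roll(pixel_values, shifts=int(torch.randint(1, width, (1,))), dims=-1)
    e0 = model.get_image_features(pixel_values=pixel_values)
    e1 = model.get_image_features(pixel_values=rolled)
    e0 = e0 / e0.norm(dim=-1, keepdim=True)
    e1 = e1 / e1.norm(dim=-1, keepdim=True)
    return (1.0 - (e0 * e1).sum(dim=-1)).mean()  # 1 - cosine similarity

# One training step on a stand-in batch; a real run would iterate over a dataloader.
pixel_values = torch.randn(4, 3, 224, 224)
loss = invariance_loss(pixel_values)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice an invariance term alone can collapse the embeddings, so it would presumably be combined with the original contrastive objective; balancing the two is a natural source of the trade-off between 360-degree understanding and original semantic evaluation performance that the paper reports.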