The paper introduces Physics-Constrained Multimodal Data Evaluation (PCMDE), a novel metric designed to evaluate the semantic and structural accuracy of multimodal synthetic images, addressing limitations of existing metrics like BLEU and CLIPScore. PCMDE leverages object detection, vision-language models, and large language models to extract spatial, semantic, and relational information from images. The metric then uses confidence-weighted component fusion and physics-guided reasoning to assess the alignment, position, and consistency of objects within the image, providing a more robust evaluation.
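The summary above mentions confidence-weighted component fusion but does not spell out the formula. Under the plain reading that each component-level score (e.g., per-object alignment or position checks) is weighted by its extraction confidence, a minimal sketch could look like the following; the function name and the normalisation by total confidence are assumptions, not details from the paper.

```python
def fuse_components(scores, confidences):
    """Confidence-weighted fusion (assumed form): average the component
    scores, weighting each by its detection/extraction confidence so that
    unreliable components contribute less to the final metric."""
    total = sum(confidences)
    if total == 0:
        raise ValueError("all confidences are zero; nothing to fuse")
    return sum(s * c for s, c in zip(scores, confidences)) / total

# Example: three component scores with decreasing confidence.
# fuse_components([0.9, 0.6, 0.8], [1.0, 0.5, 0.25]) -> 0.8
```

A weighted average like this lets a low-confidence detection (say, a partially occluded object) be down-weighted rather than corrupting the overall score.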
Current image evaluation metrics can't tell if your synthetic data violates the laws of physics, so this paper introduces a new metric that can.
Current state-of-the-art measures such as BLEU, CIDEr, VQA score, SigLIP-2, and CLIPScore often fail to capture semantic or structural accuracy, especially in domain-specific or context-dependent scenarios. To address this, the paper proposes the Physics-Constrained Multimodal Data Evaluation (PCMDE) metric, which combines large language models with reasoning, knowledge-based mapping, and vision-language models. The architecture comprises three main stages: (1) feature extraction of spatial and semantic information via object detection and VLMs; (2) Confidence-Weighted Component Fusion for adaptive, component-level validation; and (3) physics-guided reasoning with large language models to enforce structural and relational constraints (e.g., alignment, position, consistency).
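In the paper, constraint enforcement in stage (3) is done via LLM reasoning, but a deterministic toy version of one such physical constraint (an object should rest on its support, not float above it) illustrates what "position consistency" means for detected bounding boxes. Everything below, including the `DetectedObject` structure, the `check_support` rule, and the pixel tolerance, is an illustrative assumption rather than the paper's method.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    box: tuple  # (x, y, w, h) in pixels, y grows downward
    confidence: float

def check_support(obj, support, tol=5):
    """Toy physics constraint: obj passes if its bottom edge sits on
    (within tol pixels of) the top edge of its supporting surface."""
    obj_bottom = obj.box[1] + obj.box[3]
    support_top = support.box[1]
    return abs(obj_bottom - support_top) <= tol

# A cup resting on a table passes; a cup floating above it fails.
cup = DetectedObject("cup", (50, 80, 20, 20), 0.95)
table = DetectedObject("table", (0, 100, 200, 10), 0.99)
floating_cup = DetectedObject("cup", (50, 40, 20, 20), 0.90)
```

A boolean check per constraint is the simplest case; per-constraint scores of this kind are what a fusion step would then combine into the overall metric.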