This paper introduces a novel visual transformer architecture for remote sensing image analysis that incorporates auxiliary geospatial information to predict health outcomes. The core innovation is a geospatial embedding mechanism that transforms diverse geospatial data into spatially aligned embedding patches, coupled with a guided attention module that dynamically integrates multimodal information. Experiments demonstrate that the proposed framework outperforms existing geospatial foundation models in predicting disease prevalence, indicating improved multimodal geospatial understanding.
By spatially aligning geospatial data with image patches and using a guided attention mechanism, this model significantly boosts the accuracy of disease prevalence prediction from remote sensing imagery.
Visual transformers have driven major progress in remote sensing image analysis, particularly in object detection and segmentation. Recent vision-language and multimodal models further extend these capabilities by incorporating auxiliary information, including captions, question-answer pairs, and metadata, broadening applications beyond conventional computer vision tasks. However, these models are typically optimized for semantic alignment between visual and textual content rather than geospatial understanding, and are therefore ill-suited to representing or reasoning over structured geospatial layers. In this study, we propose a novel model that enhances remote sensing image processing with guidance from auxiliary geospatial information. Our approach introduces a geospatial embedding mechanism that transforms diverse geospatial data into embedding patches spatially aligned with image patches. To facilitate cross-modal interaction, we design a guided attention module that dynamically integrates multimodal information by computing attention weights based on correlations with auxiliary data, thereby directing the model toward the most relevant regions. In addition, the module assigns distinct roles to individual attention heads, allowing the model to capture complementary aspects of the guidance information and improving the interpretability of its predictions. Experimental results demonstrate that the proposed framework outperforms existing pretrained geospatial foundation models in predicting disease prevalence, highlighting its effectiveness in multimodal geospatial understanding.
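The abstract describes two mechanisms: spatially aligning auxiliary geospatial data with image patches, and biasing attention by correlation with that auxiliary data. The sketch below illustrates one plausible reading of both ideas in NumPy; all shapes, the linear projection, and the cosine-similarity bias are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): a 4x4 grid of patches, embedding dim 8.
n_patches, d = 16, 8

# Image patch embeddings, as produced by a standard ViT patch projection.
img_tokens = rng.standard_normal((n_patches, d))

# Geospatial embedding mechanism (sketch): auxiliary raster data resampled
# onto the same 4x4 patch grid (here, 12 flattened features per patch),
# then linearly projected so each geospatial token aligns with one image patch.
geo_features = rng.standard_normal((4, 4, 12))
W_geo = rng.standard_normal((12, d)) / np.sqrt(12)
geo_tokens = (geo_features @ W_geo).reshape(n_patches, d)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def guided_attention(q, k, v, guide):
    """Single-head attention whose logits are biased by the correlation
    between query tokens and the spatially aligned guidance tokens --
    one way to read 'attention weights based on correlations with
    auxiliary data'."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    logits = (q @ k.T) * scale
    # Guidance bias: cosine similarity between each query token and the
    # guidance embedding at each key position.
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    gn = guide / np.linalg.norm(guide, axis=-1, keepdims=True)
    weights = softmax(logits + qn @ gn.T, axis=-1)
    return weights @ v, weights

out, w = guided_attention(img_tokens, img_tokens, img_tokens, geo_tokens)
```

A multi-head variant with "distinct roles" per head might, for example, feed each head a different subset of the guidance channels; the single-head form above only shows the alignment and biasing steps.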