The paper introduces EarthEmbeddingExplorer, a web application facilitating cross-modal retrieval of global satellite images using precomputed Earth embeddings. It addresses the gap between academic Earth observation models and practical tools by providing an accessible interface for querying via natural language, visual examples, and geolocation. The application allows users to derive scientific insights from retrieval results, effectively democratizing access to state-of-the-art models and data.
Querying satellite imagery just got easier: EarthEmbeddingExplorer lets you find images by text, visual example, or location, surfacing insights previously locked inside research papers.
While the Earth observation community has witnessed a surge in high-impact foundation models and global Earth embedding datasets, a significant barrier remains in translating these academic assets into freely accessible tools. This tutorial introduces EarthEmbeddingExplorer, an interactive web application designed to bridge this gap, transforming static research artifacts into dynamic, practical workflows for discovery. We will provide a comprehensive hands-on guide to the system, detailing its cloud-native software architecture, demonstrating cross-modal queries (natural language, visual, and geolocation), and showcasing how to derive scientific insights from retrieval results. By democratizing access to precomputed Earth embeddings, this tutorial empowers researchers to seamlessly transition from state-of-the-art models and data archives to real-world application and analysis. The web application is available at https://modelscope.ai/studios/Major-TOM/EarthEmbeddingExplorer.
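The abstract describes cross-modal retrieval over precomputed Earth embeddings: a query (text, image, or location) is encoded into the same vector space as the precomputed image embeddings, and the nearest vectors are returned. The sketch below illustrates that core similarity-search step only; it is not the application's actual code. The random matrices stand in for real embeddings (which would come from a CLIP-style vision-language encoder), and `retrieve` is a hypothetical helper name.

```python
# Minimal sketch of embedding-based cross-modal retrieval.
# Assumption: random vectors stand in for real precomputed Earth
# embeddings and for the encoded query; a production system would
# obtain both from a shared vision-language embedding model.
import numpy as np

rng = np.random.default_rng(0)

# Precomputed embeddings for N satellite image patches (N x D),
# L2-normalized so a dot product equals cosine similarity.
image_embeddings = rng.normal(size=(1000, 64))
image_embeddings /= np.linalg.norm(image_embeddings, axis=1, keepdims=True)

def retrieve(query_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k image embeddings most similar to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = image_embeddings @ q           # cosine similarities
    return np.argsort(scores)[::-1][:k]     # top-k, highest first

# The query vector would come from a text, image, or geolocation encoder.
query = rng.normal(size=64)
top_k = retrieve(query, k=5)
print(top_k)
```

At global scale, the exhaustive dot product here would typically be replaced by an approximate nearest-neighbor index, but the retrieval logic is the same.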