The paper introduces EO-VAE, a multi-sensor variational autoencoder designed as a foundational tokenizer for Earth Observation (EO) data, addressing the challenges of diverse sensor specifications and variable spectral channels. EO-VAE employs dynamic hypernetworks to encode and reconstruct flexible channel combinations within a single model, unlike previous approaches that require separate tokenizers for each modality. Experiments on the TerraMesh dataset demonstrate that EO-VAE achieves improved reconstruction fidelity compared to existing TerraMind tokenizers, providing a strong baseline for latent generative modeling in remote sensing.
A single EO-VAE model can now handle the diverse spectral channels of Earth observation data, outperforming modality-specific tokenizers.
State-of-the-art generative image and video models rely heavily on tokenizers that compress high-dimensional inputs into more efficient latent representations. While this paradigm has revolutionized RGB generation, Earth observation (EO) data presents unique challenges due to diverse sensor specifications and variable spectral channels. We propose EO-VAE, a multi-sensor variational autoencoder designed to serve as a foundational tokenizer for the EO domain. Unlike prior approaches that train separate tokenizers for each modality, EO-VAE utilizes a single model to encode and reconstruct flexible channel combinations via dynamic hypernetworks. Our experiments on the TerraMesh dataset demonstrate that EO-VAE achieves superior reconstruction fidelity compared to the TerraMind tokenizers, establishing a robust baseline for latent generative modeling in remote sensing.
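To make the hypernetwork idea concrete, here is a minimal NumPy sketch of a channel-adaptive encoder. All names (the channel vocabulary, `encode`, the embedding and weight shapes) are illustrative assumptions, not taken from the paper: each spectral channel gets a learned embedding, a hypernetwork generates that channel's projection weights from the embedding, and contributions from whichever channels are present are summed into a fixed-size latent, so one model handles arbitrary channel combinations.

```python
import numpy as np

rng = np.random.default_rng(0)

D_EMB, D_LATENT = 8, 16

# Hypothetical channel vocabulary (Sentinel-2 bands and Sentinel-1
# polarizations are stand-ins; the real channel set is an assumption).
CHANNELS = ["S2_B02", "S2_B03", "S2_B04", "S2_B08", "S1_VV", "S1_VH"]
channel_emb = {c: rng.normal(size=D_EMB) for c in CHANNELS}

# Hypernetwork: here a single linear map from a channel embedding to that
# channel's projection weights (in practice this would be a learned MLP).
W_hyper = rng.normal(size=(D_EMB, D_LATENT)) * 0.1

def encode(pixels: dict) -> np.ndarray:
    """Project an arbitrary subset of channels into a fixed-size latent.

    Each present channel contributes value * hyper(embedding); absent
    channels contribute nothing, so any sensor combination is valid input.
    """
    z = np.zeros(D_LATENT)
    for name, value in pixels.items():
        w = channel_emb[name] @ W_hyper  # generated per-channel weights
        z += value * w
    return z

# Two different sensor configurations flow through the same encoder.
z_optical = encode({"S2_B02": 0.3, "S2_B03": 0.5, "S2_B04": 0.4})
z_radar = encode({"S1_VV": -7.1, "S1_VH": -12.4})
assert z_optical.shape == z_radar.shape == (D_LATENT,)
```

The key design point this illustrates is that no fixed input channel count is baked into the network: sensor identity enters only through the embeddings fed to the hypernetwork, which is what lets a single model replace per-modality tokenizers.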