This paper introduces MapSR, a prompt-driven framework for land cover map super-resolution that uses only low-resolution labels to upgrade coarse land cover maps to the resolution of the input imagery. It extracts class prompts from frozen vision foundation model features with a lightweight linear probe, then performs high-resolution mapping via training-free metric inference and graph-based prediction refinement. Experiments on the Chesapeake Bay dataset show that MapSR achieves a competitive 59.64% mIoU without any high-resolution labels, while significantly reducing trainable parameters and training time compared to existing weakly supervised methods.
Achieve high-resolution land cover mapping that rivals the strongest weakly supervised methods, with zero high-resolution training labels and a 10,000x reduction in trainable parameters.
High-resolution (HR) land-cover mapping is often constrained by the high cost of dense HR annotations. We revisit this problem from the perspective of map super-resolution, which enhances coarse low-resolution (LR) land-cover products into HR maps at the resolution of the input imagery. Existing weakly supervised methods can leverage LR labels, but they typically use them to retrain dense predictors with substantial computational cost. We propose MapSR, a prompt-driven framework that decouples supervision from model training. MapSR uses LR labels once to extract class prompts from frozen vision foundation model features through a lightweight linear probe, after which HR mapping proceeds via training-free metric inference and graph-based prediction refinement. Specifically, class prompts are estimated by aggregating high-confidence HR features identified by the linear probe, and HR predictions are obtained by cosine-similarity matching followed by graph-based propagation for spatial refinement. Experiments on the Chesapeake Bay dataset show that MapSR achieves 59.64% mIoU without any HR labels, remaining competitive with the strongest weakly supervised baseline and surpassing a fully supervised baseline. Notably, MapSR reduces trainable parameters by four orders of magnitude and shortens training time from hours to minutes, enabling scalable HR mapping under limited annotation and compute budgets. The code is available at https://github.com/rikirikirikiriki/MapSR.
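The pipeline sketched in the abstract (prompt estimation from high-confidence linear-probe predictions, cosine-similarity metric inference, then spatial refinement) can be illustrated as follows. This is a minimal sketch under assumptions, not the released implementation: function names, the confidence threshold, and the majority-vote refinement (a much-simplified stand-in for the paper's graph-based propagation) are all illustrative.

```python
import numpy as np

def estimate_class_prompts(features, probe_logits, num_classes, conf_thresh=0.9):
    """Estimate one prompt vector per class by averaging high-confidence
    features selected by a linear probe. `conf_thresh` is an assumed value,
    not taken from the paper.
    features: (N, D) frozen foundation-model features.
    probe_logits: (N, C) linear-probe logits over the C classes.
    """
    # Softmax over classes to get per-pixel confidence.
    probs = np.exp(probe_logits - probe_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    prompts = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = (pred == c) & (conf >= conf_thresh)
        if mask.any():
            prompts[c] = features[mask].mean(axis=0)
    # L2-normalise so cosine similarity reduces to a dot product.
    prompts /= np.linalg.norm(prompts, axis=1, keepdims=True) + 1e-8
    return prompts

def metric_inference(features, prompts):
    """Training-free inference: assign each feature to the class whose
    prompt has the highest cosine similarity."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    return (f @ prompts.T).argmax(axis=1)

def spatial_refine(pred_map, num_iters=1):
    """Toy spatial refinement: iterated majority vote over each pixel and
    its 4-neighbourhood (a simplified stand-in for graph propagation)."""
    out = pred_map.copy()
    for _ in range(num_iters):
        padded = np.pad(out, 1, mode="edge")
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:], out])
        out = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, neigh)
    return out
```

Because the foundation model stays frozen and only the linear probe is trained, the per-class prompts are the only learned state consumed at inference time, which is what keeps the trainable footprint so small.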