This paper introduces a method for training deep learning models to segment individual tree crowns in aerial imagery by leveraging pseudo-labels generated from aerial laser scanning (ALS) data. The core innovation lies in enhancing these ALS-derived pseudo-labels using the Segment Anything Model 2 (SAM 2) to improve segmentation accuracy. Experiments demonstrate that models trained with these enhanced pseudo-labels outperform existing general-domain segmentation models on the task of tree crown segmentation, offering a cost-effective alternative to manual annotation.
Ditch manual annotation for tree crown segmentation: this method uses enhanced LiDAR-derived pseudo-labels to train deep learning models that beat general-purpose models.
Mapping individual tree crowns is essential for tasks such as maintaining urban tree inventories and monitoring forest health, which help us understand and care for our environment. However, automatically separating crowns from each other in aerial imagery is challenging due to factors such as similar crown textures and partial overlaps between neighboring crowns. In this study, we present a method to train deep learning models that segment and separate individual trees in RGB and multispectral images, using pseudo-labels derived from aerial laser scanning (ALS) data. Our study shows that the ALS-derived pseudo-labels can be enhanced using a zero-shot instance segmentation model, Segment Anything Model 2 (SAM 2). Our method offers a way to obtain domain-specific training annotations for optical image-based models without any manual annotation cost, yielding segmentation models that outperform available general-domain models on the same task.
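The abstract does not specify how crown pseudo-labels are derived from the ALS point cloud before SAM 2 refinement. A common approach in the literature is marker-controlled watershed segmentation on a canopy height model (CHM) rasterized from the ALS returns: local height maxima are treated as tree tops, and basins grown from them become candidate crown instances. The sketch below illustrates that general idea with scikit-image; function names and parameters (`crown_pseudo_labels`, `min_height`, `min_distance`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def crown_pseudo_labels(chm, min_height=2.0, min_distance=3):
    """Derive crown instance pseudo-labels from a canopy height model.

    Local maxima of the CHM are treated as tree tops and used as
    watershed seeds; each resulting basin is one candidate crown mask.
    (Illustrative sketch, not the paper's actual pipeline.)
    """
    canopy = chm > min_height  # mask out ground and low vegetation
    regions, _ = ndimage.label(canopy)
    # Tree tops: local height maxima, searched per connected canopy region.
    tops = peak_local_max(chm, min_distance=min_distance, labels=regions)
    markers = np.zeros(chm.shape, dtype=int)
    for i, (r, c) in enumerate(tops, start=1):
        markers[r, c] = i
    # Watershed on inverted height: basins grow downhill from each top,
    # so touching crowns are split along the height valley between them.
    return watershed(-chm, markers=markers, mask=canopy)

# Tiny synthetic CHM with two Gaussian "crowns"
yy, xx = np.mgrid[0:40, 0:40]
chm = (10 * np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / 30)
       + 8 * np.exp(-((yy - 28) ** 2 + (xx - 28) ** 2) / 30))
labels = crown_pseudo_labels(chm)
print(labels.max())  # number of crown instances found
```

Masks produced this way tend to have blocky, height-driven boundaries; prompting SAM 2 with them (e.g. via seed points or boxes) lets the zero-shot model snap the outlines to the optical image evidence, which is the enhancement step the abstract describes.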