This paper introduces a new dataset of head-and-neck radiotherapy-induced normal tissue injuries (ORN, CE, CRN) to facilitate research on automatic injury segmentation. To address the challenge of limited annotations, the authors propose a 3D SAM-based progressive prompting framework that incorporates text, dose-guided box, and click prompts for multi-task segmentation, together with a small-target focus loss for small and sparse lesions. Experiments demonstrate improved segmentation performance across diverse injury types compared with existing methods.
Achieve reliable segmentation of radiotherapy-induced normal tissue injuries, even with limited data, by intelligently prompting SAM with task-specific text, dose information, and iterative clicks.
Radiotherapy-induced normal tissue injury is a clinically important complication, and accurate segmentation of injury regions from medical images could facilitate disease assessment, treatment planning, and longitudinal monitoring. However, automatic segmentation of these lesions remains largely unexplored because of limited voxel-level annotations and substantial heterogeneity across injury types, lesion size, and imaging modality. To address this gap, we curate a dedicated head-and-neck radiotherapy-induced normal tissue injury dataset covering three manifestations: osteoradionecrosis (ORN), cerebral edema (CE), and cerebral radiation necrosis (CRN). We further propose a 3D SAM-based progressive prompting framework for multi-task segmentation in limited-data settings. The framework progressively incorporates three complementary prompts: text prompts for task-aware adaptation, dose-guided box prompts for coarse localization, and click prompts for iterative refinement. A small-target focus loss is introduced to improve local prediction and boundary delineation for small and sparse lesions. Experiments on ORN, CE, and CRN demonstrate that the proposed method achieves reliable segmentation performance across diverse injury types and outperforms state-of-the-art methods.
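The abstract does not define the small-target focus loss. As one illustrative possibility only (not the authors' formulation), the idea of emphasizing small, sparse lesions can be sketched as a Dice-style loss with a focal voxel weight, so that hard-to-predict voxels, which dominate small targets, contribute more to the objective. All function and parameter names below are hypothetical:

```python
import numpy as np

def small_target_focus_loss(pred, target, gamma=2.0, eps=1e-6):
    """Hypothetical sketch of a small-target-focused loss.

    pred:   predicted foreground probabilities in [0, 1], any shape
    target: binary ground-truth mask, same shape as pred
    gamma:  focal exponent; larger values up-weight misclassified voxels

    A Dice overlap is computed with a per-voxel focal weight
    |target - pred|**gamma, so voxels the model gets wrong (typical
    of small, sparse lesions) dominate the loss.
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    # Focal weight: 0 for perfectly predicted voxels, up to 1 for errors
    weight = np.abs(target - pred) ** gamma
    intersection = np.sum(weight * pred * target)
    denom = np.sum(weight * (pred + target))
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice

# Toy 3D volumes: a perfect prediction yields ~0 loss,
# an inverted prediction yields ~1.
mask = np.zeros((4, 4, 4))
mask[1, 1, 1] = 1.0  # a single-voxel "lesion"
loss_good = small_target_focus_loss(mask, mask)
loss_bad = small_target_focus_loss(1.0 - mask, mask)
```

Under this sketch, a perfect prediction zeroes every focal weight and the loss collapses to zero, while errors on a tiny lesion are weighted as heavily as errors on a large one.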