This paper introduces Ultrasound-CLIP, a contrastive learning framework tailored for ultrasound image-text understanding that addresses the limitations of applying standard CLIP models to this modality. The authors construct a large-scale ultrasound image-text dataset (US-365K) and an Ultrasonographic Diagnostic Taxonomy (UDT) to provide structured semantic knowledge. Ultrasound-CLIP incorporates semantic soft labels and a semantic loss to refine sample discrimination, along with a heterogeneous graph modality for reasoning over lesion-attribute relations, achieving state-of-the-art results on classification and retrieval tasks.
Ultrasound-CLIP leverages a new large-scale dataset and diagnostic taxonomy to achieve state-of-the-art performance in ultrasound image-text understanding, demonstrating the power of domain-specific pre-training.
Ultrasound imaging is widely used in clinical diagnostics due to its real-time capability and radiation-free nature. However, existing vision-language pre-training models such as CLIP are designed primarily for natural images and are difficult to apply directly to ultrasound data, which exhibit heterogeneous anatomical structures and diverse diagnostic attributes. To bridge this gap, we construct US-365K, a large-scale ultrasound image-text dataset containing 365k paired samples across 52 anatomical categories. We further establish the Ultrasonographic Diagnostic Taxonomy (UDT), which comprises two hierarchical knowledge frameworks: the Ultrasonographic Hierarchical Anatomical Taxonomy, which standardizes anatomical organization, and the Ultrasonographic Diagnostic Attribute Framework (UDAF), which formalizes nine diagnostic dimensions: body system, organ, diagnosis, shape, margins, echogenicity, internal characteristics, posterior acoustic phenomena, and vascularity. Building on these foundations, we propose Ultrasound-CLIP, a semantic-aware contrastive learning framework that introduces semantic soft labels and a semantic loss to refine sample discrimination. Moreover, we construct a heterogeneous graph modality derived from UDAF's textual representations, enabling structured reasoning over lesion-attribute relations. Extensive experiments with patient-level data splitting demonstrate that our approach achieves state-of-the-art performance on classification and retrieval benchmarks, while also generalizing strongly in zero-shot, linear-probing, and fine-tuning settings.
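To make the semantic-soft-label idea concrete, here is a minimal PyTorch sketch of a CLIP-style symmetric contrastive loss in which the usual one-hot targets are replaced by soft targets derived from overlap between UDT diagnostic attributes. The paper's exact formulation is not given in the abstract; the Jaccard similarity, both temperature values, and all names (`semantic_soft_label_loss`, `attr_sets`, etc.) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_targets_from_attributes(attr_sets, label_temp=0.1):
    """Build soft targets from pairwise attribute overlap.

    Assumes Jaccard similarity over sets of UDT attribute values; the
    paper's actual semantic-similarity measure may differ.
    """
    n = len(attr_sets)
    sim = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            inter = len(attr_sets[i] & attr_sets[j])
            union = len(attr_sets[i] | attr_sets[j])
            sim[i, j] = inter / union if union else 0.0
    # Sharpen the symmetric similarity matrix and normalize each row
    # into a probability distribution over the batch.
    return F.softmax(sim / label_temp, dim=1)

def semantic_soft_label_loss(img_emb, txt_emb, soft_targets, logit_temp=0.07):
    """CLIP-style symmetric loss with soft targets instead of one-hot labels."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / logit_temp  # (B, B) image-text logits
    # Because the attribute-similarity matrix is symmetric, the same
    # row-normalized targets are valid for both retrieval directions.
    loss_i2t = F.cross_entropy(logits, soft_targets)      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), soft_targets)  # text -> image
    return 0.5 * (loss_i2t + loss_t2i)
```

Note that a sample's self-similarity is 1.0, so after sharpening the diagonal still dominates; the effect is akin to label smoothing that redistributes mass toward semantically similar in-batch pairs rather than penalizing them as hard negatives.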
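The heterogeneous graph modality can be pictured as lesion nodes linked to attribute-value nodes along the UDAF dimensions. Below is a hedged sketch using PyTorch Geometric's `HeteroData`; the node and edge type names, the toy record, and the placeholder 512-dimensional features are assumptions for illustration, not the paper's actual schema.

```python
import torch
from torch_geometric.data import HeteroData

# Toy UDAF-style record (illustrative values, not from the US-365K dataset).
lesion_attrs = {
    "shape": "oval",
    "margins": "circumscribed",
    "echogenicity": "hypoechoic",
    "posterior_acoustic_phenomena": "enhancement",
    "vascularity": "peripheral",
}

data = HeteroData()

# One lesion node with a placeholder feature vector (in practice, e.g.,
# a pooled image embedding from the vision encoder).
data["lesion"].x = torch.randn(1, 512)

# One attribute-value node per filled UDAF dimension, each with a
# placeholder embedding (in practice, text-encoder embeddings of the
# attribute phrases).
attr_values = list(lesion_attrs.values())
data["attribute"].x = torch.randn(len(attr_values), 512)

# Typed edges: lesion --has_attribute--> attribute, plus reverse edges
# so message passing can propagate in both directions.
src = torch.zeros(len(attr_values), dtype=torch.long)
dst = torch.arange(len(attr_values))
data["lesion", "has_attribute", "attribute"].edge_index = torch.stack([src, dst])
data["attribute", "attr_of", "lesion"].edge_index = torch.stack([dst, src])

print(data)
```

A heterogeneous GNN (e.g., one built with `torch_geometric.nn.HeteroConv`) could then reason over these lesion-attribute relations, which is one plausible reading of the structured reasoning the abstract describes.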