The paper introduces TriDerm, a multimodal framework that learns interpretable wound representations by integrating wound imagery, boundary masks, and expert reports, targeting the difficulty of capturing clinically meaningful features for recessive dystrophic epidermolysis bullosa (RDEB). TriDerm adapts visual foundation models with wound-level attention pooling and non-contrastive learning, and recovers text representations from LLM comparison queries via soft ordinal embeddings (SOE). Fusing the visual and textual modalities achieves 73.5% agreement with experts on ordinal comparisons of wound similarity, outperforming single-modality foundation models.
Expert ordinal comparisons reveal that fusing vision and language in wound representation learning boosts agreement by over 5.6 percentage points relative to the best unimodal foundation model for a rare genetic skin disorder.
Recessive dystrophic epidermolysis bullosa (RDEB) is a rare genetic skin disorder for which clinicians greatly benefit from finding similar cases using images and clinical text. However, off-the-shelf foundation models do not reliably capture clinically meaningful features for this heterogeneous, long-tail disease, and structured measurement of agreement with experts is challenging. To address these gaps, we propose evaluating embedding spaces with expert ordinal comparisons (triplet judgments), which are fast to collect and which encode implicit clinical similarity knowledge. We further introduce TriDerm, a multimodal framework that learns interpretable wound representations from small cohorts by integrating wound imagery, boundary masks, and expert reports. On the vision side, TriDerm adapts visual foundation models to RDEB using wound-level attention pooling and non-contrastive representation learning. For text, we prompt large language models with comparison queries and recover medically meaningful representations via soft ordinal embeddings (SOE). We show that visual and textual modalities capture complementary aspects of wound phenotype, and that fusing both modalities yields 73.5% agreement with experts, outperforming the best off-the-shelf single-modality foundation model by over 5.6 percentage points. We make the expert annotation tool, model code, and representative dataset samples publicly available.
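To make the evaluation protocol concrete, here is a minimal sketch (not the authors' released code) of how agreement with expert ordinal comparisons can be scored: each triplet (a, p, n) records that an expert judged wound a more similar to p than to n, and the metric is the fraction of triplets on which embedding-space distances agree. The Euclidean metric and the function name are illustrative assumptions.

```python
import numpy as np

def triplet_agreement(embeddings: np.ndarray, triplets) -> float:
    """Fraction of expert triplets (anchor, closer, farther) on which
    Euclidean distances in the embedding space match the expert judgment.
    The distance metric is an assumption; cosine distance works the same way."""
    agree = 0
    for a, p, n in triplets:
        d_ap = np.linalg.norm(embeddings[a] - embeddings[p])
        d_an = np.linalg.norm(embeddings[a] - embeddings[n])
        agree += int(d_ap < d_an)
    return agree / len(triplets)

# Toy usage: four wound embeddings, two expert judgments.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
print(triplet_agreement(emb, [(0, 1, 2), (2, 3, 0)]))  # -> 1.0
```

The wound-level attention pooling used on the vision side can be sketched as a single learned query attending over patch tokens from a frozen backbone; the head count, dimensions, and the use of the boundary mask as a key-padding mask below are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Pool a variable number of patch tokens into one wound embedding
    using a single learned query (dimensions are illustrative)."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * dim ** -0.5)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens, pad_mask=None):
        # tokens: (batch, n_patches, dim); pad_mask is True where a patch
        # should be ignored, e.g. patches outside the wound boundary mask.
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens, key_padding_mask=pad_mask)
        return pooled.squeeze(1)  # (batch, dim)

pool = AttentionPool()
patches = torch.randn(2, 196, 768)  # e.g. patch tokens from a frozen ViT
print(pool(patches).shape)          # torch.Size([2, 768])
```

Finally, soft ordinal embedding (SOE) recovers point coordinates purely from ordinal triplet comparisons, which the paper applies to comparison queries answered by an LLM. A hinged squared-distance margin loss optimized by gradient descent is the standard SOE formulation; the margin, dimensionality, and optimizer below are illustrative choices.

```python
import torch

def soft_ordinal_embedding(n_items, triplets, dim=2, margin=0.1,
                           steps=500, lr=0.1):
    """Learn coordinates for n_items from triplets (i, j, k), each meaning
    'item i is closer to j than to k', via a soft (hinged) margin loss."""
    x = torch.randn(n_items, dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    idx = torch.tensor(triplets)  # shape (T, 3)
    for _ in range(steps):
        d_ij = (x[idx[:, 0]] - x[idx[:, 1]]).pow(2).sum(-1)
        d_ik = (x[idx[:, 0]] - x[idx[:, 2]]).pow(2).sum(-1)
        # Penalize triplets whose distances violate the stated ordering.
        loss = torch.relu(d_ij + margin - d_ik).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```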