MOOZY is a patient-first foundation model for computational pathology, built by pretraining a slide encoder with masked self-distillation on public slide feature grids and then aligning those representations with clinical semantics via a case transformer and multi-task supervision. By explicitly modeling dependencies across slides from the same patient, MOOZY achieves state-of-the-art performance on held-out tasks, outperforming TITAN and PRISM in weighted F1, weighted ROC-AUC, and balanced accuracy. At 85.77M parameters, the model is also markedly parameter-efficient, which adds to its practicality.
Patient-level pretraining in computational pathology unlocks surprisingly transferable embeddings: MOOZY outperforms slide-centric models while being 14x smaller than GigaPath.
Computational pathology needs whole-slide image (WSI) foundation models that transfer across diverse clinical tasks, yet current approaches remain largely slide-centric, often depend on private data and expensive paired-report supervision, and do not explicitly model relationships among multiple slides from the same patient. We present MOOZY, a patient-first pathology foundation model in which the patient case, not the individual slide, is the core unit of representation. MOOZY explicitly models dependencies across all slides from the same patient via a case transformer during pretraining, combining multi-stage open self-supervision with scaled low-cost task supervision. In Stage 1, we pretrain a vision-only slide encoder on 77,134 public slide feature grids using masked self-distillation. In Stage 2, we align these representations with clinical semantics using a case transformer and multi-task supervision over 333 tasks from 56 public datasets, including 205 classification and 128 survival tasks across four endpoints. Across eight held-out tasks with five-fold frozen-feature probe evaluation, MOOZY achieves best or tied-best performance on most metrics and improves macro averages over TITAN by +7.37%, +5.50%, and +7.83% and over PRISM by +8.83%, +10.70%, and +9.78% for weighted F1, weighted ROC-AUC, and balanced accuracy, respectively. MOOZY is also parameter-efficient, with 85.77M parameters, 14x smaller than GigaPath. These results demonstrate that open, reproducible patient-level pretraining yields transferable embeddings, providing a practical path toward scalable patient-first histopathology foundation models.
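To make the patient-first design concrete, here is a minimal sketch of how a case transformer might aggregate a variable number of per-slide embeddings into a single patient-level embedding. This is an illustration under stated assumptions, not the paper's implementation: the module name CaseTransformer, the 768-d embedding width, the depth and head counts, and the learnable [CASE] token are all hypothetical choices that the abstract does not specify.

```python
# Sketch only: hypothetical names and dimensions; not MOOZY's actual architecture.
import torch
import torch.nn as nn

class CaseTransformer(nn.Module):
    """Pools a variable number of slide embeddings from one patient
    into a single case-level embedding via self-attention."""
    def __init__(self, dim=768, depth=2, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable [CASE] token

    def forward(self, slide_embs, pad_mask=None):
        # slide_embs: (batch, n_slides, dim); pad_mask: (batch, n_slides),
        # True where a slot is padding (patients have varying slide counts).
        b = slide_embs.size(0)
        x = torch.cat([self.cls.expand(b, -1, -1), slide_embs], dim=1)
        if pad_mask is not None:
            # the [CASE] token is never padding, so prepend a False column
            keep = torch.zeros(b, 1, dtype=torch.bool, device=pad_mask.device)
            pad_mask = torch.cat([keep, pad_mask], dim=1)
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return x[:, 0]  # case embedding read off the [CASE] token

# Usage: a patient with 3 slides, each already encoded to a 768-d vector.
case_enc = CaseTransformer()
patient = torch.randn(1, 3, 768)
case_emb = case_enc(patient)  # shape (1, 768)
```

Attention across the slide tokens is what lets the model capture cross-slide dependencies within a case, rather than scoring each slide independently.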
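The evaluation protocol, five-fold probing on frozen features, can likewise be sketched. The abstract does not state the probe family, so this assumes a logistic-regression linear probe over precomputed patient embeddings; the function and variable names are hypothetical.

```python
# Sketch of five-fold frozen-feature probing; assumes a linear probe,
# which the abstract does not confirm.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, balanced_accuracy_score

def five_fold_probe(X, y, seed=0):
    """X: frozen patient embeddings, shape (n, d); y: task labels, shape (n,).
    Returns mean weighted F1 and mean balanced accuracy across folds."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    f1s, baccs = [], []
    for train_idx, test_idx in skf.split(X, y):
        # The encoder stays frozen; only this probe is fit per fold.
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        f1s.append(f1_score(y[test_idx], pred, average="weighted"))
        baccs.append(balanced_accuracy_score(y[test_idx], pred))
    return float(np.mean(f1s)), float(np.mean(baccs))
```

Because the backbone is frozen, differences in probe scores isolate the quality of the pretrained embeddings themselves, which is the comparison the reported TITAN and PRISM margins rest on.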