This paper introduces MediAnnote, an ontology-based framework that combines deep learning with the Radiology Gamuts Ontology (RGO) for collaborative annotation and retrieval of chest X-rays. The framework was evaluated on the NIH Chest X-ray dataset with three radiologists. It achieved higher disease-prediction accuracy (F1-score of 0.54, AUC of 0.75) than the individual radiologists, and human-in-the-loop integration improved abnormality localization. User satisfaction with the framework was high.
MediAnnote offers a collaborative, ontology-driven approach to chest X-ray annotation that improves disease prediction accuracy compared to individual radiologist interpretations.
Medical imaging is a cornerstone of healthcare, supporting disease diagnosis, treatment planning, and clinical research. The growing volume of medical imaging data has made image annotation an increasingly complex yet essential step in producing reliable, high-quality labeled datasets for diagnostic, educational, and research purposes. This work presents MediAnnote, an ontology-based framework for medical image annotation and retrieval that combines a deep learning component for pre-annotation with the Radiology Gamuts Ontology (RGO) within a collaborative environment. The framework enables image retrieval along with the corresponding findings and causes, enhancing both educational and clinical applications. In a qualitative comparison, MediAnnote was the only system to incorporate all essential components, outperforming existing annotation systems. An experimental study involving three radiologists and the NIH Chest X-ray dataset showed that the model achieved higher disease-prediction accuracy than the individual radiologists, with an F1-score of 0.54, an AUC of 0.75, a precision of 0.54, and a recall of 0.53. In addition, integrating a human-in-the-loop approach improved the precision of abnormality localization. The post-task survey showed high user satisfaction, with an overall mean score of 3.94 out of 5.
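As a quick consistency check on the reported metrics, the sketch below relates the paper's precision and recall figures to its F1-score via the standard harmonic-mean formula. This is only an illustration: the paper does not state its averaging scheme (micro vs. macro), and a per-class-averaged F1 need not equal the harmonic mean of the averaged precision and recall, which is a plausible explanation for the small gap between the two values.

```python
# Reported aggregate figures from the study (averaging scheme assumed unknown).
precision = 0.54
recall = 0.53

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

# ~0.535: close to, but not exactly, the reported F1 of 0.54, consistent
# with the paper averaging F1 over classes rather than deriving it from
# the averaged precision and recall.
print(round(f1, 2))
```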