This paper investigates the application of Vision Transformers (ViTs) for Alzheimer's Disease (AD) detection using MRI images, aiming to improve classification accuracy and computational efficiency compared to CNNs. The authors fine-tuned Google's vit-base-patch16-224-in21k model on an Alzheimer MRI Disease Classification dataset with four stages of dementia. The study achieves a classification accuracy of 95.55%, demonstrating the potential of ViTs in medical image analysis for AD detection.
Vision Transformers can achieve 95.55% accuracy in classifying Alzheimer's stages from MRI images, outperforming CNNs and opening new avenues for early diagnosis.
Accurate detection of Alzheimer's Disease (AD) from MRI scans is integral to early diagnosis and intervention. This paper offers a fresh perspective on AD detection by applying Vision Transformers (ViTs) to brain MRI images. The study uses an Alzheimer MRI Disease Classification dataset that categorizes MRI images into four stages: Mild Demented, Moderate Demented, Non-Demented, and Very Mild Demented. We fine-tune Google's vit-base-patch16-224-in21k Vision Transformer model to improve classification accuracy. Compared to Convolutional Neural Networks (CNNs), both computational efficiency and classification accuracy are enhanced by exploiting the ViT's ability to operate directly on image patches. The MRI images are pre-processed into RGB format and converted to tensors for input into the model. The fine-tuned Vision Transformer achieves a classification accuracy of 95.55%. These results can serve as a benchmark for upcoming research in AD detection and demonstrate the effectiveness of Vision Transformers in the medical field. The study underscores the capability of ViTs to improve the accuracy of AD detection and highlights the need for further research to optimize the model.
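The paper does not include code, but the pipeline it describes (grayscale MRI slice → 3-channel RGB → normalized tensor → 16×16 patches, as implied by the vit-base-patch16-224 model name) can be sketched as follows. This is a minimal, dependency-light illustration, not the authors' implementation: the nearest-neighbor resize and the 0.5/0.5 normalization constants are assumptions standing in for the real preprocessing (a practical pipeline would use a `ViTImageProcessor` or torchvision transforms).

```python
import numpy as np

def preprocess_mri(gray, size=224):
    """Turn a 2D grayscale MRI slice (uint8) into a normalized (3, size, size) array.

    Resizing here is simple nearest-neighbor index selection to keep the
    sketch dependency-free; real pipelines interpolate via PIL/torchvision.
    """
    h, w = gray.shape
    rows = np.arange(size) * h // size          # nearest source row per output row
    cols = np.arange(size) * w // size          # nearest source column per output column
    resized = gray[rows][:, cols].astype(np.float32) / 255.0
    rgb = np.stack([resized] * 3, axis=0)       # replicate the channel to fake RGB
    mean, std = 0.5, 0.5                        # assumed normalization constants
    return (rgb - mean) / std

def to_patches(img, patch=16):
    """Split a (3, H, W) image into the flattened 16x16 patches a ViT consumes."""
    c, h, w = img.shape
    img = img.reshape(c, h // patch, patch, w // patch, patch)
    img = img.transpose(1, 3, 0, 2, 4)          # -> (n_h, n_w, c, patch, patch)
    return img.reshape(-1, c * patch * patch)   # -> (num_patches, patch_dim)

slice_ = (np.random.rand(208, 176) * 255).astype(np.uint8)  # dummy MRI slice
x = preprocess_mri(slice_)
patches = to_patches(x)
print(x.shape, patches.shape)  # (3, 224, 224) (196, 768)
```

For a 224×224 input and 16×16 patches this yields the 14×14 = 196 patch embeddings of dimension 3·16·16 = 768 that vit-base-patch16-224 projects into its transformer encoder.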