Alzheimer’s disease is a progressive neurodegenerative disorder that leads to cognitive decline, and early, accurate diagnosis is essential so that intervention can begin before the disease advances. Magnetic resonance imaging (MRI) is widely used to detect structural brain changes associated with Alzheimer’s disease; however, manual interpretation of MRI scans is time-consuming and subject to inter-observer variability. Deep learning approaches have shown strong potential for automated MRI analysis, but their black-box nature limits interpretability and clinical trust. This study proposes a transfer learning–based deep learning framework for Alzheimer’s disease classification, complemented by explainable artificial intelligence (XAI) techniques to analyze model predictions. A pretrained VGG16 model is employed to classify MRI images into four cognitive impairment categories: no impairment, very mild impairment, mild impairment, and moderate impairment. To enhance transparency, Grad-CAM, Saliency Maps, and Guided Grad-CAM are applied to visualize the brain regions that contribute most to each prediction. Experimental results show that the proposed approach achieves 95.41% accuracy, indicating that a well-balanced network architecture combined with integrated explainability techniques yields effective, interpretable classification. The visual explanations highlight clinically meaningful brain regions that align with known Alzheimer’s disease–related structural changes. These findings suggest that combining deep transfer learning with explainable artificial intelligence can provide accurate and interpretable decision support for Alzheimer’s disease diagnosis. This study is limited by its use of a single publicly available dataset and two-dimensional MRI slices, which may affect generalizability across clinical environments.
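As a rough illustration of the pipeline described above, the sketch below builds a four-class classifier on a frozen, ImageNet-pretrained VGG16 backbone and computes a Grad-CAM heatmap for a single prediction. The input resolution, the classifier head, the choice of "block5_conv3" as the target convolutional layer, and the replication of grayscale MRI slices to three channels are assumptions for illustration; the paper's exact configuration and preprocessing are not specified here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4            # no, very mild, mild, moderate impairment
IMG_SHAPE = (224, 224, 3)  # assumed input size; grayscale slices replicated to 3 channels

# Transfer learning: frozen ImageNet-pretrained VGG16 backbone with a small classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)      # head width is an assumption
x = layers.Dropout(0.5)(x)
preds = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(base.input, preds)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])


def grad_cam(img_batch, class_index=None, conv_layer_name="block5_conv3"):
    """Return a Grad-CAM heatmap in [0, 1] for a preprocessed image batch of shape (1, H, W, 3)."""
    grad_model = models.Model(
        model.input, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, probs = grad_model(img_batch)
        if class_index is None:
            class_index = tf.argmax(probs[0])      # explain the predicted class by default
        class_score = probs[:, class_index]
    grads = tape.gradient(class_score, conv_out)   # d(class score) / d(conv activations)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # channel weights via global average pooling
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                       # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The returned heatmap can be upsampled to the slice resolution and overlaid on the MRI image to show which regions drove the prediction; saliency maps and Guided Grad-CAM follow the same gradient-based pattern but propagate back to the input pixels rather than to an intermediate convolutional layer.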