Keywords: Alzheimer’s disease, spatial and temporal data, CNN-LSTM hybrid model
Abstract
This work explores the utility of multimodal imaging techniques such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) in the early detection of Alzheimer’s disease (AD), as well as the role of machine learning, notably convolutional neural networks (CNNs). To enhance diagnostic performance on brain scans, the study employed a range of models, including Visual Geometry Group 16 (VGG16), EfficientNetB7, and a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) architecture, to capture both spatial and temporal features. Trained on datasets from Kaggle and the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the models achieved very high accuracy: VGG16 and EfficientNetB7 reached an overall accuracy of 98.68% and an F-score of 98.95%. Receiver operating characteristic (ROC) curve analysis demonstrated strong discriminative ability, with an area under the curve (AUC) ranging from 0.80 to 0.83. The CNN-LSTM hybrid model effectively handled long-term dependencies in the image data, leading to improved performance. These results show that deep learning holds considerable promise for aiding the early diagnosis of AD and for providing a useful tool to enhance diagnostic precision in clinical settings. Future work may expand the dataset and refine the algorithms to strengthen predictive robustness.
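The general shape of a CNN-LSTM hybrid of the kind named above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the layer sizes, slice count (`num_slices`), image resolution, and class count are assumptions chosen for a small runnable example, and the code assumes a Keras/TensorFlow environment.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(num_slices=8, img_size=64, num_classes=4):
    """Hypothetical CNN-LSTM sketch: a small CNN extracts spatial features
    from each scan slice, and an LSTM models dependencies across slices."""
    inputs = layers.Input(shape=(num_slices, img_size, img_size, 1))
    # TimeDistributed applies the same CNN to every slice in the sequence.
    x = layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu"))(inputs)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    # The LSTM consumes the per-slice feature vectors in order.
    x = layers.LSTM(32)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_cnn_lstm()
# Dummy batch of 2 "scans", each a sequence of 8 single-channel slices.
probs = model(np.zeros((2, 8, 64, 64, 1), dtype="float32"))
print(probs.shape)  # one softmax distribution per scan: (2, 4)
```

In practice the per-slice CNN would typically be a pretrained backbone (e.g. VGG16 or EfficientNetB7, as in the study) rather than a single convolutional layer.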