Found 3 Documents
A novel convolutional neural network architecture for Alzheimer’s disease classification using magnetic resonance imaging data Abuowaida, Suhaila; Mustafa, Zaid; Aburomman, Ahmad; Alshdaifat, Nawaf; Iqtait, Musab
International Journal of Electrical and Computer Engineering (IJECE) Vol 15, No 3: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v15i3.pp3519-3526

Abstract

Accurate categorization of Alzheimer’s disease is crucial for medical diagnosis and the development of therapeutic strategies. Deep learning models have shown significant potential in this task; however, they often struggle with the intricate and varied characteristics of Alzheimer’s disease. To address this difficulty, we propose a new architecture for Alzheimer’s disease classification using magnetic resonance imaging data. The design, named Res-BRNet, combines deep residual and boundary-based convolutional neural networks (CNNs). Res-BRNet systematically fuses boundary-focused operations within adapted spatial and residual blocks: the spatial blocks retrieve information on the uniformity, diversity, and boundaries of Alzheimer’s disease, while the residual blocks capture texture differences at both local and global levels. In our performance assessment, Res-BRNet surpassed conventional CNN models, achieving an outstanding accuracy of 99.22%. These findings indicate that Res-BRNet is a promising tool for classifying Alzheimer’s disease, with the potential to enhance the precision and effectiveness of clinical diagnosis and treatment planning.
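The abstract does not give implementation details of the Res-BRNet blocks, but the residual-block idea it builds on can be illustrated in a few lines. The sketch below is a minimal, fully hypothetical NumPy version: a two-layer transform F(x) plus a skip connection, so the block learns a residual rather than a full mapping. The real Res-BRNet blocks are convolutional and include boundary-focused operations not shown here.

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Minimal residual block sketch: y = relu(x + F(x)).

    The skip connection adds the input back to the transformed signal,
    which is what lets deep residual CNNs (like those Res-BRNet adapts)
    train stably at depth. Illustrative only; weights w1, w2 are
    hypothetical dense matrices standing in for conv filters.
    """
    f = relu(x @ w1) @ w2   # two-layer transform F(x)
    return relu(x + f)      # skip connection, then nonlinearity
```

Note that with all-zero weights the transform F vanishes and the block reduces to relu(x), i.e. the skip path carries the signal through unchanged, which is the property that makes very deep stacks of such blocks trainable.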
Multi-Class Mangrove Classification Using Transfer Learning with MobileNet-V3 on Multi-Organ Images Sudrajat, Ari; Apnena, Riri Damayanti; Rahayu, Ayu Hendrati; Iqtait, Musab
Jurnal Teknik Informatika (Jutif) Vol. 6 No. 3 (2025): JUTIF Volume 6, Number 3, Juni 2025
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2025.6.3.4683

Abstract

Mangrove ecosystems are important for coastal protection, biodiversity conservation, and climate change mitigation. However, accurate identification of mangrove species is challenging due to the morphological similarities between species, especially when they are analyzed from limited plant organs such as leaves or stems. Manual identification methods are traditionally time-consuming and error-prone, and they require expert knowledge. To address these issues, this research proposes an automatic classification system based on deep learning, leveraging the MobileNet-V3 architecture. The system uses images of three plant organs—leaves, stems, and seeds—from five mangrove species: Avicennia marina, Avicennia officinalis, Avicennia rumphiana, Rhizophora mucronata, and Sonneratia alba. Data augmentation techniques such as rotation, shifting, and flipping, as well as sharpness enhancement, were applied in the preprocessing step to increase data variability and improve model generalization. The model was trained with a carefully selected set of hyperparameters and extensively validated through training and testing steps. The experimental results demonstrated outstanding performance, with a training accuracy of 99.88% and perfect precision, recall, and F1-score values of 100%. Furthermore, testing on unseen data confirmed the robustness of the model, as all test samples were correctly identified. This research concludes that the MobileNet-V3 architecture offers an effective approach to mangrove species classification and suggests that future work should involve larger and more varied datasets, real-world field environments, and the investigation of ensemble models to further extend the adaptability and scalability of mangrove monitoring systems.
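The augmentation operations the abstract names (flipping, shifting, and a simple sharpness boost) can each be sketched directly on image arrays. The NumPy functions below are an illustrative stand-in for whatever augmentation pipeline the authors actually used; in practice such transforms usually come from a library (e.g. a framework's image-preprocessing utilities) rather than being hand-rolled.

```python
import numpy as np

def flip_horizontal(img):
    """Mirror the image left-to-right."""
    return img[:, ::-1]

def shift(img, dy, dx):
    """Translate the image by (dy, dx) pixels, zero-padding vacated areas."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    ys = slice(max(dy, 0), h + min(dy, 0))   # destination rows
    xs = slice(max(dx, 0), w + min(dx, 0))   # destination cols
    yt = slice(max(-dy, 0), h + min(-dy, 0)) # source rows
    xt = slice(max(-dx, 0), w + min(-dx, 0)) # source cols
    out[ys, xs] = img[yt, xt]
    return out

def sharpen(img, amount=1.0):
    """Unsharp-style sharpening: boost the difference between each interior
    pixel and its 4-neighbour mean. Borders are left un-blurred in this
    sketch, so only the interior is sharpened meaningfully."""
    blur = np.zeros_like(img, dtype=float)
    blur[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                        + img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    out = img + amount * (img - blur)
    return np.clip(out, 0, 255)
```

Rotation, the remaining augmentation named in the abstract, is available directly as `np.rot90` for right-angle rotations; arbitrary-angle rotation needs an interpolating library routine.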
Facial features extraction using active shape model and constrained local model: a comprehensive analysis study Iqtait, Musab; Alqaryouti, Marwan Harb; Sadeq, Ala Eddin; Abuowaida, Suhaila; Issa, Abedalhakeem; Almatarneh, Sattam
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 5: October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i5.pp4299-4307

Abstract

Human facial feature extraction plays a critical role in various applications, including biorobotics, polygraph testing, and driver fatigue monitoring. However, many existing algorithms rely on end-to-end models that construct complex classifiers directly from face images, leading to poor interpretability. Additionally, these models often fail to capture dynamic information effectively because they give insufficient consideration to respondents' personal characteristics. To address these limitations, this paper evaluates two prominent approaches: the constrained local model (CLM), which extracts facial features using patch experts, and the active shape model (ASM), which is designed to extract the appearance and shape of an object simultaneously. We assess the performance of these models on the MORPH dataset using the point-to-point error as the evaluation metric. Our experimental results demonstrate that the CLM achieves higher accuracy, while the ASM exhibits better efficiency. These findings provide valuable insights for selecting the appropriate model based on specific application requirements.
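The point-to-point error used as the evaluation metric here is the mean Euclidean distance between predicted and ground-truth landmark positions, often normalized by a reference length such as the inter-ocular distance. A minimal NumPy implementation (the exact normalization the paper uses is not stated in the abstract, so the `norm` parameter is an assumption):

```python
import numpy as np

def point_to_point_error(pred, gt, norm=None):
    """Mean Euclidean distance between predicted and ground-truth landmarks.

    pred, gt : arrays of shape (n_landmarks, 2) holding (x, y) coordinates.
    norm     : optional reference length (e.g. inter-ocular distance) used
               to make the error scale-invariant; hypothetical here, since
               the abstract does not specify the normalization.
    """
    d = np.linalg.norm(pred - gt, axis=-1)  # per-landmark distance
    err = d.mean()
    return err / norm if norm else err
```

For example, if every predicted landmark is offset from the ground truth by a (3, 4) pixel displacement, each per-landmark distance is 5 pixels, so the unnormalized error is 5.0.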