Found 3 Documents
Facial Expression Recognition Using Fused Features: A Comparison of Deep and Machine Learning
Jabbooree, Abbas Issa; Alkaabi, Hussein; Kamber, Ali Nadhim
Journal of Computer Networks, Architecture and High Performance Computing Vol. 7 No. 3 (2025): Articles Research July 2025
Publisher : Information Technology and Science (ITScience)

DOI: 10.47709/cnahpc.v7i3.6128

Abstract

Facial expression recognition (FER) is a highly active field with applications in computer vision, human-computer interaction, security, and computer graphics animation. Recent advances in deep learning and machine learning have increased interest in using these techniques for accurate facial expression classification. This paper presents a comparative study that evaluates the performance of deep learning and machine learning classifiers in FER systems, specifically after data fusion. Data fusion combines multiple sources of information to enhance overall classification accuracy: two types of features, geometrical and appearance, are extracted and trained using two convolutional neural networks, and the feature outputs of these networks are fused into a final feature vector for classification. The deep learning approach is evaluated on two benchmark datasets, the extended Cohn-Kanade (CK+) and Oulu-CASIA datasets. As a point of comparison, a traditional machine learning approach based on the support vector machine (SVM) is evaluated on the same datasets, using classification accuracy, precision, recall, and F1-score as performance metrics. The results highlight the strengths and limitations of both deep learning and machine learning techniques when employed as classifiers in FER systems. Notably, the experimental results demonstrate that the deep learning approach significantly outperforms the baseline methods, improving recognition accuracy by 5.22% on the CK+ dataset and 3.07% on the Oulu-CASIA dataset.
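The feature-level fusion described in the abstract (concatenating geometric and appearance descriptors into one final feature vector) can be sketched as follows. This is a minimal illustration, not the paper's method: the two toy descriptors below stand in for the paper's two CNN feature extractors, and all names, shapes, and landmark counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_features(landmarks):
    """Toy geometric descriptor: pairwise distances between facial
    landmarks (a stand-in for the paper's geometry-branch CNN)."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(landmarks), k=1)   # upper triangle, no diagonal
    return dists[iu]

def appearance_features(image):
    """Toy appearance descriptor: 8x8 block-averaged pixel intensities
    (a stand-in for the paper's appearance-branch CNN)."""
    h, w = image.shape
    blocks = image.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
    return blocks.ravel()

landmarks = rng.random((68, 2))   # 68 facial landmarks (x, y), an assumption
face = rng.random((64, 64))       # grayscale face crop, an assumption

# Feature-level fusion: concatenate both descriptors into one vector,
# which would then feed the final classifier (deep network or SVM).
fused = np.concatenate([geometric_features(landmarks),
                        appearance_features(face)])
print(fused.shape)  # (2342,) = 68*67/2 pairwise distances + 8*8 blocks
```

The design point is that fusion happens before classification, so the same fused vector can be handed either to a deep classifier or to an SVM for the comparison the paper reports.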
From Static to Contextual: A Survey of Embedding Advances in NLP
Alkaabi, Hussein; Jasim, Ali Kadhim; Darroudi, Ali
PERFECT: Journal of Smart Algorithms Vol. 2 No. 2 (2025), Article Research July 2025
Publisher : LEMBAGA KAJIAN PEMBANGUNAN PERTANIAN DAN LINGKUNGAN (LKPPL)

DOI: 10.62671/perfect.v2i2.77

Abstract

Embedding techniques have been a cornerstone of Natural Language Processing (NLP), enabling machines to represent textual data in a form that captures semantic and syntactic relationships. Over the years, the field has witnessed a significant evolution—from static word embeddings, such as Word2Vec and GloVe, which represent words as fixed vectors, to dynamic, contextualized embeddings like BERT and GPT, which generate word representations based on their surrounding context. This survey provides a comprehensive overview of embedding techniques, tracing their development from early methods to state-of-the-art approaches. We discuss the strengths and limitations of each paradigm, their applications across various NLP tasks, and the challenges they address, such as polysemy and out-of-vocabulary words. Furthermore, we highlight emerging trends, including multimodal embeddings, domain-specific representations, and efforts to mitigate embedding bias. By synthesizing the advancements in this rapidly evolving field, this paper aims to serve as a valuable resource for researchers and practitioners while identifying open challenges and future directions for embedding research in NLP.
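The static-versus-contextual distinction the survey traces can be illustrated with a toy example. This is a deliberately simplified sketch, not how Word2Vec or BERT actually work: the vectors and the neighbor-averaging "contextualizer" below are invented assumptions standing in for a lookup table and self-attention, respectively.

```python
import numpy as np

# Static embeddings: one fixed vector per word, regardless of context.
# The vector for the polysemous "bank" must blend both of its senses.
static = {
    "bank":  np.array([0.5, 0.5]),
    "river": np.array([1.0, 0.0]),
    "money": np.array([0.0, 1.0]),
}

def contextual(word, sentence, mix=0.5):
    """Toy contextualizer: shift a word's vector toward the mean of its
    neighbors' static vectors. Real contextual models (BERT, GPT) do
    this far more expressively via stacked self-attention layers."""
    neighbors = [static[w] for w in sentence if w != word]
    ctx = np.mean(neighbors, axis=0)
    return (1 - mix) * static[word] + mix * ctx

v1 = contextual("bank", ["river", "bank"])
v2 = contextual("bank", ["money", "bank"])
print(np.allclose(v1, v2))  # False: same word, different contexts
```

The point of the contrast: a static model returns the identical vector for "bank" in both sentences, which is exactly the polysemy limitation the survey identifies, while a contextual model resolves it by conditioning on the surrounding words.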
Explainable AI for Medical Imaging: A Taxonomy Based on Clinical Task Requirements
Kamber, Ali Nadhim; Alkaabi, Hussein; Al-Rekabi, Mohammed; Jasim, Ali Kadhim
PERFECT: Journal of Smart Algorithms Vol. 2 No. 2 (2025), Article Research July 2025
Publisher : LEMBAGA KAJIAN PEMBANGUNAN PERTANIAN DAN LINGKUNGAN (LKPPL)

DOI: 10.62671/perfect.v2i2.115

Abstract

Explainable Artificial Intelligence (XAI) has emerged as a critical enabler for deploying AI-driven medical imaging systems where transparency, trust, and accountability are paramount. However, most current taxonomies of XAI methods categorize techniques based on algorithmic families (e.g., saliency maps, attribution methods), which often fail to reflect the practical requirements of clinical tasks. This paper proposes a novel task-centric taxonomy of XAI in medical imaging that aligns explanation techniques with four key clinical tasks: classification, detection, segmentation, and prognostic assessment. For each task, we analyze how different XAI methods enhance model interpretability, their suitability for clinical decision-making, and their limitations in real-world applications. Our taxonomy aims to provide a practical framework for researchers and practitioners to select appropriate XAI strategies tailored to the specific demands of medical imaging workflows. Furthermore, we highlight the current gaps in task-specific explainability and propose future research directions towards clinically meaningful, task-driven XAI solutions. This work serves as a step towards bridging the gap between technical XAI developments and the functional needs of clinical practice.
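One family of methods the abstract mentions, saliency/attribution maps, can be sketched with a simple occlusion test: mask each image region, measure how much the model's score drops, and treat larger drops as higher importance. This is a generic illustration of the technique, not the paper's taxonomy; the toy model and all sizes below are assumptions.

```python
import numpy as np

def occlusion_saliency(image, predict, patch=4):
    """Post-hoc occlusion map: zero out each patch and record the drop
    in the model's score. A larger drop means the region mattered more
    to the prediction."""
    base = predict(image)
    h, w = image.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            sal[i // patch, j // patch] = base - predict(masked)
    return sal

# Toy "model": scores only the mean intensity of a fixed 4x4 region,
# mimicking a classifier that relies on one image area (e.g. a lesion).
def toy_predict(img):
    return img[4:8, 4:8].mean()

img = np.ones((16, 16))
sal = occlusion_saliency(img, toy_predict)
print(np.unravel_index(sal.argmax(), sal.shape))  # (1, 1): the region used
```

Occlusion maps are model-agnostic and easy to audit, which is why they fit classification and detection tasks well; the trade-off the paper's task-centric view highlights is that such coarse heatmaps are less suitable for fine-grained tasks like segmentation.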