Articles

Found 3 Documents

Cuckoo algorithm with great deluge local-search for feature selection problems Khalil Alsmadi, Mutasem; Alzaqebah, Malek; Jawarneh, Sana; Brini, Sami; Al-Marashdeh, Ibrahim; Briki, Khaoula; Alrefai, Nashat; Ali Alghamdi, Fahad; Al-Rashdan, Maen T.
International Journal of Electrical and Computer Engineering (IJECE) Vol 12, No 4: August 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v12i4.pp4315-4326

Abstract

The feature selection problem is concerned with searching a dataset for a subset of features that reduces training time and improves the accuracy of a classification method. Feature selection algorithms are therefore proposed to choose important features from large and complex datasets. The cuckoo search (CS) algorithm is a nature-inspired optimization algorithm that is widely applied to find the optimum solution for a given problem. In this work, the cuckoo search algorithm is hybridized with a local search algorithm to find a satisfactory solution to the feature selection problem. The great deluge (GD) algorithm is an iterative search procedure that can accept some worse moves in order to reach better solutions and to increase the exploitation ability of CS. A comparison is also provided to examine the performance of the proposed method against the original CS algorithm. As a result, on the UCI datasets the proposed algorithm outperforms the original algorithm and produces results comparable with those reported in the literature.
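
The abstract does not include implementation details; the following is a minimal, hypothetical sketch of how a binary cuckoo search with a great-deluge acceptance rule might wrap a classifier for feature selection. The KNN wrapper, flip rate, population size, and rain speed are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Mean 3-fold accuracy of a KNN wrapper on the selected feature subset (illustrative)."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()

def random_flip(mask, rate=0.1):
    """Flip a random subset of feature bits (a simplified stand-in for a Levy-flight move)."""
    flips = np.random.rand(mask.size) < rate
    return np.where(flips, 1 - mask, mask)

def cs_gd_feature_selection(X, y, n_nests=10, n_iter=50, rain_speed=0.002):
    n_features = X.shape[1]
    nests = np.random.randint(0, 2, size=(n_nests, n_features))
    scores = np.array([fitness(m, X, y) for m in nests])
    best_idx = scores.argmax()
    best_mask, best_score = nests[best_idx].copy(), scores[best_idx]
    water_level = scores.mean()            # GD acceptance threshold, starts permissive
    for _ in range(n_iter):
        for i in range(n_nests):
            candidate = random_flip(nests[i])
            cand_score = fitness(candidate, X, y)
            # Great-deluge rule: accept improvements, or worse moves still above the water level
            if cand_score >= scores[i] or cand_score >= water_level:
                nests[i], scores[i] = candidate, cand_score
                if cand_score > best_score:
                    best_mask, best_score = candidate.copy(), cand_score
        water_level += rain_speed          # raise the level, gradually tightening acceptance
    return best_mask, best_score
```
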
A hybrid DMO-CNN-LSTM framework for feature selection and diabetes prediction: a deep learning perspective Alsmadi, Mutasem K.; Jaradat, Ghaith M.; Alsallak, Tariq; Alzaqebah, Malek; Jawarneh, Sana; Alfagham, Hayat; Alqurni, Jehad; Badawi, Usama A.; Almusfar, Latifa Abdullah
International Journal of Electrical and Computer Engineering (IJECE) Vol 15, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v15i6.pp5555-5569

Abstract

The early and accurate prediction of diabetes mellitus remains a significant challenge in clinical decision-making due to the high dimensionality, noise, and heterogeneity of medical data. This study proposes a novel hybrid classification framework that integrates the dwarf mongoose optimization (DMO) algorithm for feature selection with a convolutional neural network-long short-term memory (CNN-LSTM) deep learning architecture for predictive modeling. The DMO algorithm is employed to intelligently select the most informative subset of features from a large-scale diabetes dataset collected from 130 U.S. hospitals over a 10-year period. These optimized features are then processed by the CNN-LSTM model, which combines spatial pattern recognition and temporal sequence learning to enhance predictive accuracy. Extensive experiments compared the proposed model against traditional machine learning models (logistic regression, random forest, XGBoost), baseline deep learning models (MLP, standalone CNN, standalone LSTM), and state-of-the-art hybrid classifiers. The proposed DMO-CNN-LSTM model achieved the highest classification performance, with an accuracy of 96.1%, an F1-score of 94.6%, and a ROC-AUC of 0.96, significantly outperforming the other models. Additional analyses, including a confusion matrix, ROC curves, training convergence plots, and statistical evaluations, confirm the robustness and generalizability of the approach. These findings suggest that the DMO-CNN-LSTM framework offers a powerful and interpretable tool for intelligent diabetes prediction, with strong potential for integration into real-world clinical decision-support systems.
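
As a rough illustration of the CNN-LSTM half of such a pipeline, the sketch below builds a small Keras classifier over DMO-selected features. The layer sizes, optimizer, and the assumption that a binary DMO feature mask is already available are placeholders, not the configuration reported in the paper.

```python
import numpy as np
import tensorflow as tf

def build_cnn_lstm(n_selected_features):
    """CNN-LSTM binary classifier over a feature vector treated as a length-n sequence (illustrative)."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_selected_features, 1)),
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # binary diabetes/no-diabetes label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Hypothetical usage: `dmo_mask` is the binary feature mask produced by the DMO search
# (not shown here); X has shape (n_samples, n_features), y is a 0/1 label vector.
def train_on_selected(X, y, dmo_mask, epochs=20):
    X_sel = X[:, dmo_mask == 1]
    X_seq = X_sel[..., np.newaxis]          # add a channel axis for Conv1D/LSTM
    model = build_cnn_lstm(X_sel.shape[1])
    model.fit(X_seq, y, validation_split=0.2, epochs=epochs, batch_size=64)
    return model
```
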
Optimizing resume information extraction through TSHD segmentation and advanced deep learning techniques Abuhamdah, Anmar; Al-Shabi, Mohammed; Jawarneh, Sana
Indonesian Journal of Electrical Engineering and Computer Science Vol 40, No 3: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v40.i3.pp1453-1465

Abstract

This research focuses on a significant task in natural language processing: extracting information from unstructured textual data through efficient methods in order to obtain useful insights and structured representations. It attempts to boost the effectiveness of information retrieval systems through computational analysis. This paradigm is explored using extractive question answering models, a modern information extraction approach, in a new methodology that combines the topic segmentation based on headings detection (TSHD) algorithm with deep learning methods. The TSHD algorithm breaks documents into sections, each addressing a specific topic. Refined extraction models are then used to process these disjoint segments, leading to more accurate and context-judicious extraction than naive whole-document extraction approaches. We empirically validate this approach using the Stanford Question Answering Dataset (SQuAD) 1.1, with a specific adaptation to resumes. Experimental results show that the performance metrics increase by 7.4% in exact match (EM) and by 7.8% in F1-score. These results illustrate the feasibility of the proposed approach in automated information extraction frameworks such as resume processing.
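
A minimal sketch of the segment-then-extract idea follows, assuming a heading-based splitter as a simplified stand-in for TSHD and a public SQuAD-fine-tuned checkpoint; the regular expression, model name, and best-span scoring rule are illustrative, not the authors' implementation.

```python
import re
from transformers import pipeline

def split_by_headings(text):
    """Split a resume into sections at lines that look like headings
    (short title-case lines) -- a simplified stand-in for the TSHD algorithm."""
    heading = re.compile(r"^\s*([A-Z][A-Za-z ]{2,40})\s*$")
    sections, current = [], []
    for line in text.splitlines():
        if heading.match(line) and current:
            sections.append("\n".join(current))
            current = [line]
        else:
            current.append(line)
    if current:
        sections.append("\n".join(current))
    return sections

# Extractive QA over each segment; keep the highest-scoring span across segments.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def extract_field(question, resume_text):
    best = {"score": 0.0, "answer": None}
    for segment in split_by_headings(resume_text):
        if not segment.strip():
            continue
        result = qa(question=question, context=segment)
        if result["score"] > best["score"]:
            best = result
    return best["answer"]
```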