Contact Name
Imam Much Ibnu Subroto
Contact Email
imam@unissula.ac.id
Phone
-
Journal Mail Official
ijai@iaesjournal.com
Editorial Address
-
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN: 2089-4872     EISSN: 2252-8938     DOI: -
IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all artificial intelligence areas and their applications in the following topics: neural networks; fuzzy logic; simulated biological evolution algorithms (such as genetic algorithms and ant colony optimization); reasoning and evolution; intelligence applications; computer vision and speech understanding; multimedia and cognitive informatics; data mining and machine learning tools; heuristic and AI planning strategies and tools; computational theories of learning; technology and computing (such as particle swarm optimization); intelligent system architectures; knowledge representation; bioinformatics; natural language processing; multi-agent systems; etc.
Arjuna Subject : -
Articles 1,808 Documents
Text summarization: BART, RF, and hybrid BART-RF algorithm comparison Zamzam, Muhammad Adib; Buono, Agus; Haryanto, Toto
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp929-940

Abstract

Data and information accumulate both quantitatively and qualitatively, and abundant text data are posted on the internet; this volume correlates with the complexity of summarization. Automatic text summarization (ATS) is one of the most challenging tasks in natural language processing (NLP). ATS is approached in three ways: extractive, abstractive, and hybrid, where the hybrid approach combines the extractive and abstractive ones. This research tests and compares the performance of the bidirectional auto-regressive transformer (BART) and random forest (RF) individually, as well as the performance of a hybrid BART-RF combination, in ATS. The results show that the individual BART and RF recall-oriented understudy for gisting evaluation (ROUGE) scores differ considerably: the RF ROUGE scores for R1, R2, and RL are 51.45, 45.52, and 54.58 respectively, while the BART scores are 32.78, 16.17, and 32.19. The average ROUGE F-measures for RF, BART, and RF×BART are 45.73, 21.38, and 31.31 respectively, so RF has the highest average score. The hybrid RF×BART ATS is shown to perform better than the default BART: its average ROUGE F-measure reaches a moderate 31.31, which exceeds the default BART's ROUGE score. RF×BART can therefore serve as an effective hybrid alternative.
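The ROUGE figures cited in this abstract (R1, R2, RL) all derive from n-gram or subsequence overlap between a candidate summary and a reference. As an illustration of the underlying computation only, not the authors' code, a minimal pure-Python ROUGE-N sketch:

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-grams (as tuples) in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n=1):
    """ROUGE-N recall, precision, and F-measure from clipped n-gram overlap."""
    cand = Counter(ngrams(candidate.split(), n))
    ref = Counter(ngrams(reference.split(), n))
    overlap = sum((cand & ref).values())           # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)   # ROUGE is recall-oriented
    precision = overlap / max(sum(cand.values()), 1)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f
```

For example, `rouge_n("the cat sat", "the cat sat on the mat")` yields recall 0.5 (3 of 6 reference unigrams matched) and precision 1.0.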
Neuro-DANet: dual attention deep neural network long short term memory for autism spectrum disorder detection Hanumantharayappa, Sujatha; Bharamagoudra, Manjula Rudragouda
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp810-823

Abstract

Autism spectrum disorder (ASD) is a neurological disorder that affects individuals' ability to communicate and interact socially, and it can be diagnosed at any age. Early detection of ASD is especially important because of its subtle characteristics and the high costs associated with the detection process. Traditional deep learning (DL) models struggle to capture the intricate spatiotemporal dependencies in functional magnetic resonance imaging (fMRI) data, resulting in reduced detection performance and poor generalization. To address these drawbacks, the proposed Neuro-DANet combines a dual-attention deep neural network (DA-DNN) with long short-term memory (LSTM) to efficiently learn spatial and temporal features from fMRI scans. The continuous wavelet transform (CWT) is used to extract multi-scale features, and principal component analysis (PCA) is used for dimensionality reduction, which enhances robustness and efficiency. The dual self-attention mechanism improves the interpretability of the model by focusing on the critical brain regions and time steps most relevant to ASD severity. The developed Neuro-DANet obtains the highest accuracy, 98.51% on the autism brain imaging data exchange (ABIDE)-I dataset and 98.81% on ABIDE-II, when compared with traditional algorithms.
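This abstract uses PCA for dimensionality reduction after CWT feature extraction. As an illustrative sketch only (not the paper's pipeline), the core of PCA, finding the direction of maximum variance and projecting onto it, can be written in closed form for 2-D data:

```python
import math

def pca_2d(points):
    """First principal component of 2-D points: largest eigenvalue and
    eigenvector of the 2x2 covariance matrix, in closed form."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n   # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / n   # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n  # cov(x, y)
    # Largest eigenvalue of [[a, b], [b, c]].
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    # Corresponding eigenvector (handle the diagonal case b == 0).
    vx, vy = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)

def project(points, direction):
    """Reduce each 2-D point to its 1-D coordinate along the component."""
    return [p[0] * direction[0] + p[1] * direction[1] for p in points]
```

For points on the line y = x, the component comes out at 45 degrees and all the variance lands on that single coordinate, which is exactly the reduction PCA performs in higher dimensions.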
Brain tumor segmentation and classification using artificial hummingbird optimization algorithm Karthikeyan, Radhakrishnan; Muruganandham, Arappaleeswaran
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp429-442

Abstract

In medical practice, manually identifying brain tumors from numerous magnetic resonance imaging (MRI) images depends entirely on time and on the experience of medical personnel. Many brain tumor diagnosis frameworks use both deep learning and machine learning. This study proposes a Wasserstein deep convolutional generative adversarial network (WDCGAN) optimized using the artificial hummingbird optimization algorithm (AHBOA) for the segmentation and classification of brain tumors (SCBT). First, the input data are gathered from the BraTS dataset. They are then pre-processed using adaptive self-guided filtering (ASGF), and the result is segmented using fuzzy possibilistic C-ordered mean clustering (FPCOMC). After that, features are extracted using the dual-tree complex discrete wavelet transform (DT-CDWT). The extracted features are fed to the WDCGAN to effectively categorize the various parameters. The proposed technique is implemented in MATLAB, and performance measures such as F1-score, accuracy, error rate, precision, sensitivity, mean square error, receiver operating characteristic (ROC), and computational time are analyzed. The WDCGAN-AHBOA-SCBT method significantly improves precision in SCBT by integrating adaptive optimization strategies, achieving 32.18%, 32.75%, and 32.90% higher precision than current techniques. This demonstrates that the approach is more accurate and effective, making it a reliable tool for medical diagnosis.
An AI-driven framework for efficient and accurate calibration of electricity meters using extreme gradient boosting Rosalina, Rosalina; Afriliana, Nunik
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp154-163

Abstract

Calibration testing plays a vital role in electricity meter manufacturing to guarantee measurement accuracy and compliance with industry standards. In practice, however, conventional calibration methods are often hindered by lengthy test cycles and the high cost of expanding test bench capacity. This study proposes a data-driven approach to address these limitations by applying machine learning techniques to optimize calibration testing. An extreme gradient boosting (XGBoost) regression model, enhanced through systematic hyperparameter tuning and feature engineering, was developed to predict calibration outcomes using data obtained from existing production test benches. When evaluated under real manufacturing line conditions, the proposed method shortened calibration runtime by about 55% compared with manual procedures relying on power supply units (PSU) and standard meter calculations, while maintaining reliable measurement accuracy. The framework also achieved lower root mean square error (RMSE), demonstrating improved predictive performance. In addition to reporting these results, the study describes the preprocessing pipeline, model selection process, and optimization strategy, providing a practical and replicable framework for integrating artificial intelligence (AI) into industrial calibration processes.
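XGBoost, used in this abstract for calibration prediction, implements gradient-boosted trees; the core idea, each new tree fitting the residuals of the current ensemble, can be sketched with one-dimensional decision stumps and squared error. This is a toy illustration of the principle, not the authors' calibration model:

```python
def fit_stump(x, residuals):
    """Least-squares decision stump on a 1-D feature: pick the threshold
    whose left/right means best fit the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        sse = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    return best[1], best[2], best[3]

def boost(x, y, n_rounds=20, lr=0.3):
    """Gradient boosting for squared error: for this loss the negative
    gradient is the residual, so each stump is fitted to the residuals
    and added with a learning-rate (shrinkage) factor."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, residuals)
        stumps.append((t, lv, rv))
        pred = [p + lr * (lv if xi <= t else rv) for p, xi in zip(pred, x)]
    return stumps, pred
```

XGBoost adds regularization, second-order gradients, and full multi-feature trees on top of this loop, but the residual-fitting structure is the same.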
Deep learning-based spam detection for WhatsApp chatbot fallback reduction Sadewo, Satrio; Zahra, Amalia
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp909-918

Abstract

Chatbots on WhatsApp are widely used for customer service, but their effectiveness is often undermined by fallback responses when user input cannot be understood. A major cause of these fallbacks is unsolicited spam, which disrupts interactions and reduces service quality. This study develops and evaluates a spam detection system aimed at reducing fallback rates and enhancing user experience. A comparative analysis was conducted between traditional machine learning models (support vector machine (SVM) and decision tree (DT)) and advanced deep learning architectures, including long short-term memory (LSTM) variants (vanilla, bidirectional, stacked, convolutional neural network (CNN)-LSTM, and encoder-decoder) and transformer-based models (bidirectional encoder representations from transformers (BERT)-base, DistilBERT, and cross-lingual language model robustly optimized BERT pretraining approach (XLM-ROBERTa)). Using 170,000 messages sampled from 18 million interactions collected between July 2022 and December 2023, the models were assessed with standard evaluation metrics. Results show that CNN-LSTM and DistilBERT achieved the most robust performance. CNN-LSTM attained a precision of 0.92, recall of 0.91, F1-score of 0.91, and accuracy of 0.94, while DistilBERT achieved precision of 0.92, recall of 0.89, F1-score of 0.90, and accuracy of 0.93. These findings highlight their superior ability to capture contextual patterns in spam messages. Implementing such models is expected to significantly lower fallback rates, thereby improving chatbot reliability and user satisfaction.
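The precision, recall, F1, and accuracy figures reported in this abstract are all derived from the confusion-matrix counts of a binary classifier. For reference, a minimal computation (the counts in the usage example are made up, not the paper's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts
    (tp = true positives, fp = false positives, fn = false negatives,
    tn = true negatives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```

With hypothetical counts tp=90, fp=10, fn=10, tn=90, all four metrics come out to 0.9; the paper's CNN-LSTM numbers (0.92/0.91/0.91/0.94) imply a similarly balanced confusion matrix on an easier-to-separate negative class.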
Explainable deep learning for scalable record linkage: a TabNet-based framework for structured data integration Zahrae Saber, Fatima; Choukri, Ali; Amnai, Mohamed; Waga, Abderrahim
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp725-743

Abstract

Record linkage is a fundamental process for ensuring data quality and reliability, with critical applications in domains such as healthcare, finance, and commerce. This paper presents a machine learning-based approach for optimizing record linkage in structured datasets. By integrating hybrid blocking methods (combining standard blocking and sorted neighborhood approaches) with advanced similarity measures, the approach significantly reduces computational overhead while maintaining high accuracy. In the classification phase, the performance of TabNet, a deep learning model designed for tabular data, is compared with that of traditional deep neural networks (DNNs). Experimental results on a synthetic dataset of 5,000 records demonstrate that TabNet achieves precision and recall comparable to DNNs while reducing execution time by over 79%. These findings highlight the scalability and efficiency of the proposed method, making it well-suited for large-scale data management tasks. This work contributes practical and computationally efficient solutions for record linkage in the era of big data.
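Standard blocking and sorted neighborhood, the two candidate-generation strategies this abstract's hybrid method combines, are both straightforward to sketch. The following toy example (hypothetical records and keys, not the paper's dataset) shows how each shrinks the set of record pairs that must be compared:

```python
from itertools import combinations

def standard_blocking_pairs(records, key):
    """Group records by a blocking key; only records that share a key
    become candidate pairs."""
    blocks = {}
    for rid, rec in records.items():
        blocks.setdefault(key(rec), []).append(rid)
    pairs = set()
    for ids in blocks.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

def sorted_neighborhood_pairs(records, sort_key, window=3):
    """Sort records by a key and pair each record only with the next
    `window - 1` records in the sorted order, so near-duplicates with
    slightly different keys still meet."""
    order = sorted(records, key=lambda rid: sort_key(records[rid]))
    pairs = set()
    for i, rid in enumerate(order):
        for other in order[i + 1:i + window]:
            pairs.add(tuple(sorted((rid, other))))
    return pairs
```

A hybrid scheme can take the union of the two candidate sets: for four records keyed on (name, zip), the union here yields 5 pairs instead of all 6, and the gap widens rapidly as the dataset grows.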
A novel BERT-long short-term memory hybrid model for effective credit card fraud detection Ndama, Oussama; Ndama, Safae; Bensassi, Ismail; En-Naimi, El Mokhtar
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp788-797

Abstract

In the rapidly evolving landscape of financial transactions, the detection of fraudulent activities remains a critical challenge for financial institutions worldwide. This study introduces a novel bidirectional encoder representation from transformers (BERT)–long short-term memory (LSTM) hybrid model that integrates both textual and numerical data to enhance credit card fraud detection. Leveraging BERT for deep contextual embeddings and LSTM for sequence analysis, the model provides a comprehensive approach that surpasses traditional fraud detection systems primarily based on numerical analysis. On the validation set, the model achieved a recall of 100% and an accuracy of 99.11%, highlighting strong effectiveness in identifying fraudulent transactions under class imbalance. Through rigorous evaluation, the model demonstrated exceptional accuracy and reliability, promising improvements in fraud detection and mitigation. This paper details the development and validation of the hybrid model, emphasizing its use of mixed data types to capture complex patterns in transaction data. The results indicate a new frontier in fraud detection by combining natural language processing (NLP) and sequential data analysis to create a robust solution for real-world applications, supporting the security and integrity of financial systems globally.
Predicting university student dropouts in Latin America using machine learning Andrade-Arenas, Laberiano; Rubio Paucar, Inoc; Giraldo Retuerto, Margarita; Yactayo-Arias, Cesar
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp628-641

Abstract

In the university context, student dropout has become one of the most recurring problems, both in the short and long term. The objective of this research was to develop a predictive model using the random forest (RF) algorithm to identify patterns associated with university dropout. To achieve this, the knowledge discovery in databases (KDD) methodology was applied, which encompasses the stages of selection, preprocessing, transformation, data mining, and interpretation of results. The RF model demonstrated superior performance compared to other evaluated models, achieving an accuracy of 87%, a precision of 86%, a recall of 85%, an F1-score of 85%, and a receiver operating characteristic (ROC) area under the curve (AUC) of 0.91, highlighting its high predictive capability compared to the other techniques analyzed. Therefore, the application of the proposed model is recommended in various university institutions in order to identify potential dropout cases at an early stage.
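The ROC AUC of 0.91 reported in this abstract can be read as the probability that the model scores a randomly chosen dropout case above a randomly chosen non-dropout case. A minimal pure-Python computation of AUC under that rank interpretation (illustrative only, with made-up scores):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive example
    scores higher than a randomly chosen negative one (ties count 0.5).
    Equivalent to the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly ranked score list gives AUC 1.0; one mis-ranked positive-negative pair out of four gives 0.75.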
Change detection and classification of satellite images using convolutional neural network Srinivasaiah, Raghavendra; Kumar Jankatti, Santosh; Ramanna Lamani, Manjunath; Jinachandra, Niranjana Shravanabelagola
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp329-337

Abstract

Satellite and airborne imagery, collectively known as earth observation imagery, are images of the earth collected from spaceborne or airborne platforms such as satellites and aircraft. Over the last 100 years, with the rapid development of aviation, space exploration, and imaging technologies, the convergence of these technologies has been inevitable. Earth observation imagery has many applications in regional planning, geology, reconnaissance, fishing, meteorology, oceanography, agriculture, biodiversity conservation, forestry, landscape, intelligence, cartography, education, and warfare. With the rise in the number of airborne and spaceborne imaging platforms deployed by government and private entities alike, the capability to sift through and analyze the vast amounts of data these platforms generate is the need of the hour. With the exponential improvement in the computational capabilities of computers over the last half-century, analysts are increasingly turning to artificial intelligence, machine learning (ML), and computer vision solutions to automate a large part of the processes used to analyze earth observation imagery. This work proposes a workflow to detect and classify changes in earth observation imagery of a given area by exploiting the flexibility that convolutional neural networks (CNNs) provide.
Pneumonia classification from chest X-rays using significant feature selection and machine learning Chodagam, Yugandhar; Hiremath, Manjunatha
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp592-603

Abstract

The chest X-ray images of normal lungs differ only subtly from those of lungs with pneumonia, making image-based diagnosis highly challenging. To address this issue, we developed a machine learning (ML)-based, lightweight, end-to-end Python package that processes chest X-ray images, implements robust feature selection methods, and classifies the images using various algorithms. While many studies have focused on improving classification accuracy using newer methods, few have addressed the interpretability of the extracted features or the growing computational demands of complex models. We used four publicly available datasets and extracted first-order, textural, and transform-based radiomic features to test our package. Features were selected using Shapley additive explanations (SHAP) combined with recursive feature elimination (RFE) and stability selection algorithms. Our final solution contains a method that extracts a finite set of features identified by stability selection and feeds them as inputs into classical ML algorithms. Our model achieved 98% accuracy on the primary dataset, and 97±1%, 96±2%, and 94±2% accuracy on the other three datasets. Our approach is fast, self-contained, and requires only a compact, well-chosen set of features, making it suitable for resource-constrained clinical environments.
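Stability selection, as used for feature selection in this abstract, repeatedly subsamples the data, runs a base selector on each subsample, and keeps only the features chosen in a large fraction of runs. A hedged toy sketch, with absolute Pearson correlation standing in as an assumed, much simpler base scorer than the paper's SHAP/RFE pipeline:

```python
import random

def abs_corr(xs, ys):
    """Absolute Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return abs(cov / (vx * vy) ** 0.5) if vx and vy else 0.0

def stability_select(X, y, top_k=2, runs=50, frac=0.7, threshold=0.6, seed=0):
    """Stability selection: subsample the rows `runs` times, rank features
    by the base scorer on each subsample, and keep features that land in
    the top-k in at least `threshold` of the runs."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    counts = [0] * d
    for _ in range(runs):
        idx = rng.sample(range(n), max(2, int(frac * n)))
        scores = [abs_corr([X[i][j] for i in idx], [y[i] for i in idx])
                  for j in range(d)]
        for j in sorted(range(d), key=lambda j: -scores[j])[:top_k]:
            counts[j] += 1
    return [j for j in range(d) if counts[j] / runs >= threshold]
```

Features that only correlate with the label by chance on one subsample fall below the frequency threshold, which is what gives stability selection its robustness relative to a single selection pass.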
