Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta,
Indonesia
International Journal of Advances in Intelligent Informatics
ISSN : 2442-6571     EISSN : 2548-3161     DOI : 10.26555
Core Subject : Science
The International Journal of Advances in Intelligent Informatics (IJAIN, ISSN 2442-6571, e-ISSN 2548-3161) is a peer-reviewed, open-access journal published three times a year in English. It provides scientists and engineers throughout the world with a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. Every paper is refereed by two international reviewers, accepted papers are made available online with free access, and authors pay no publication fee.
Articles: 330 documents
Single-input and multi-input local binary pattern classification Manga, Abdul Rachman; Handayani, Anik Nur; Herwanto, Heru Wahyu; Asmara, Rosa Andrie; Raja, Roesman Ridwan
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2183

Abstract

Identification and classification of species are crucial for maintaining genetic diversity and supporting sustainable agricultural practices. The Toraja buffalo, a type of buffalo unique to Indonesia, holds high cultural and economic value, and its accurate classification is essential for preserving genetic resources and improving breeding programs. Previous studies using single-input classification methods have shown limitations in complex cases such as the Toraja buffalo, which is distinguished by numerous physiological characteristics, such as body size, head, horns, tail, and eyes. The purpose of this study is to evaluate and compare the performance of single-input and multi-input classification methods for identifying Toraja buffalo. Several algorithms, including K-Nearest Neighbors (K-NN), Random Forest, Support Vector Machine (SVM), Decision Tree, and Naive Bayes, were tested using Local Binary Pattern (LBP) for feature extraction, showing 85.83% accuracy with single-input features, while multi-input accuracy reached 92.08%. The multi-input approach consistently improved performance across all algorithms: multi-input classifiers significantly outperformed single-input methods, with Random Forest being the most efficient algorithm. Future research could incorporate additional variables such as skin color or genetic profiles to further enhance accuracy.
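As a rough illustration of the feature-extraction step described in this abstract (not the authors' code), the sketch below computes the classic 8-neighbour LBP code per pixel and forms a "multi-input" feature vector by concatenating the LBP histograms of several image regions; the function names and the idea of using body-part crops as regions are our assumptions.

```python
# Minimal LBP sketch: single-input = one histogram, multi-input = several
# region histograms concatenated into one feature vector for a classifier.

def lbp_code(img, y, x):
    """LBP code of pixel (y, x): compare with 8 neighbours -> one byte."""
    center = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

def multi_input_features(regions):
    """Concatenate per-region histograms (e.g. head, horn, body crops)."""
    feats = []
    for region in regions:
        feats.extend(lbp_histogram(region))
    return feats
```

With R regions, the multi-input vector is simply R x 256 bins long, which is what the classifiers compared in the study would consume.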
Collaborative filtering-based group recommender system using sparse autoencoder Bahar, Musthafa Zaki; Baizal, Zinke Abdurahman
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.1702

Abstract

The development of technology makes the distribution of information easier and faster, but leads to information overload. A recommender system is one tool to overcome information overload, and the collaborative filtering (CF) paradigm is a widely used approach in recommender systems. Recommender systems generally focus on individual recommendations, but in real conditions, recommendations for a group are often needed, for example, when we want to listen to music with friends or plan a vacation with family. Many prior studies have used the CF paradigm with matrix factorization to build group recommender systems. Matrix factorization has been shown to alleviate the sparsity problem; however, it does not fully resolve it. Therefore, we propose an approach that uses a sparse autoencoder to address this sparsity issue. We chose the sparse autoencoder because it can effectively capture latent patterns in sparse data by learning a compressed representation while retaining the important features crucial for accurate recommendations. We built a group recommender system with three different group sizes and aggregation approaches. For evaluation, we use the root-mean-square error (RMSE) and the mean absolute error (MAE). Test results indicate that the sparse autoencoder outperforms matrix factorization in terms of RMSE and MAE. This study improves group recommender systems by addressing data sparsity with a sparse autoencoder, enhancing recommendation accuracy compared to traditional matrix factorization methods.
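The group-recommendation step mentioned in this abstract can be illustrated with two standard aggregation strategies. The abstract does not name the aggregation approaches it tested, so the strategies and function names below (average and least misery over per-member predicted ratings) are common textbook choices, not necessarily the authors'.

```python
# Once a CF model (e.g. the sparse autoencoder) has predicted each group
# member's rating for an item, the group score is an aggregate of them.

def average_aggregation(member_ratings):
    """Group score = mean of the members' predicted ratings."""
    return sum(member_ratings) / len(member_ratings)

def least_misery_aggregation(member_ratings):
    """Group score = lowest member rating (nobody is made miserable)."""
    return min(member_ratings)

def rank_items_for_group(predictions, aggregate):
    """predictions: {item: [rating per member]} -> items, best first."""
    scores = {item: aggregate(ratings) for item, ratings in predictions.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Note that the two strategies can rank items differently: an item one member dislikes can still win under averaging but never under least misery.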
Underwater image enhancement with fuzzy histogram equalization and adaptive color correction Suharyanto, Suharyanto; Andono, Pulung Nurtantio; Fanani, Ahmad Zainul; Pujiono, Pujiono
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2174

Abstract

Marine exploration continues to increase as new technologies, such as computer vision implemented in underwater vehicles and robots, develop. Identifying underwater objects is challenging due to environmental conditions, including poor lighting and color absorption in the captured image. Underwater image enhancement has been widely applied to overcome these obstacles, and this study presents a new workflow for improving the quality of underwater images: a combination of fuzzy histogram equalization (FHE) and adaptive color correction (ACC), referred to as FHEACC, used to increase contrast and restore absorbed colors. FHEACC obtained the highest values on the UIQM and entropy metrics and ranked third on UCIQE. This shows that the image quality improved by the FHEACC combination is objectively better than that achieved with the HE, AHE, CLAHE, FHE, IBLA, RCP, and UDCP methods, especially in maintaining color balance. The proposed workflow thereby supports the optimization of underwater object identification systems in wild environments using computer vision technology.
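The abstract does not detail the ACC formulation, so as a hedged sketch of the general idea behind adaptive color correction the snippet below applies the classic gray-world assumption: each RGB channel is rescaled so its mean matches the overall mean intensity, compensating for the selective absorption of red light underwater. The function name and the gray-world choice are our assumptions, not the paper's method.

```python
def gray_world_correct(pixels):
    """Gray-world color correction on a flat list of (R, G, B) pixels:
    scale each channel so its mean equals the mean of all three channels."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3.0
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]
```

In an underwater frame the red-channel mean is typically far below the others, so its gain comes out largest, restoring part of the absorbed color before contrast enhancement is applied.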
A comprehensive comparative analysis of chicken meat classification techniques through machine learning models Anraeni, Siska; Lahuddin, Harlinda; Ramdaniah, Ramdaniah; Melani, Erika Riski; Amalia, Andi Cici; Amaliah, Tazkirah
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2014

Abstract

This study develops a digital image processing technique to distinguish between fresh and rotten chicken. Chicken freshness has a significant impact on public health and industry sustainability. This study uses a multi-stage approach including data acquisition, preprocessing, feature extraction, and classification. A total of 1,000 chicken images were obtained, split 80:20 into 800 training images and 200 testing images. Feature extraction was performed using a combination of the HSI (Hue, Saturation, Intensity) color model to capture the color characteristics of the chicken meat and the Local Binary Pattern (LBP) to extract texture information. Classification was performed using the K-Nearest Neighbor (KNN) algorithm with various K values and distance metrics. The experimental results show that the combination of color and texture features provides higher accuracy than using either feature alone. The best model, using HSI and LBP feature extraction with K = 1 and K = 3 under the Euclidean distance metric, achieved the highest accuracy of 95.4%. With a promising level of accuracy, this method can be applied in automated inspections in the poultry supply chain, improving food safety and helping consumers make better purchasing decisions. However, the main challenge in this study is the variation in lighting during image capture, which causes the fresh and rotten chicken feature values to overlap, thus preventing perfect classification.
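The classification stage this abstract describes can be sketched as a plain Euclidean-distance KNN over a concatenated color-plus-texture feature vector. This is a generic illustration (the `knn_predict` helper and the toy feature vectors are ours), not the authors' implementation.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs, where each vector
    would be e.g. HSI color statistics concatenated with an LBP histogram.
    Returns the majority label among the k nearest training samples."""
    nearest = sorted(train, key=lambda fv: euclidean(fv[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```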
Predict customer churn in the banking sector: a machine learning approach with imbalanced data handling techniques Lee, Jong-Hwa; Nguyen, Van-Ho; Le, Hoanh-Su
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2262

Abstract

Customer value analysis is a critical component in formulating effective marketing and customer relationship management (CRM) strategies, especially in sectors where client movement and strong competition are prevalent. A key element of this process lies in enhancing customer retention rates, as retaining existing clients is typically more cost-effective than acquiring new ones and directly contributes to improving overall profitability. In today’s banking environment, where customers can choose from a broad range of financial services, customer churn has become a critical challenge. Predicting and understanding attrition enables financial institutions to implement proactive and targeted interventions to protect market share and strengthen customer loyalty. This study analyzes a real-world dataset comprising 10,127 customer records from a commercial bank, where only 1,627 entries correspond to churned customers, thereby presenting a notable class imbalance problem. To address this, several data balancing techniques were applied, including class-weight adjustment, SMOTE, SMOTE-Tomek Links, and SMOTE-ENN. Multiple machine learning models - Support Vector Machine, Random Forest, Decision Tree, Logistic Regression, AdaBoost - were evaluated to identify the most effective approach for churn prediction. The Random Forest model achieved an 86% F1-score after applying SMOTE-Tomek Links, demonstrating strong predictive capability. The key contribution of this study lies in integrating advanced resampling techniques with ensemble learning and customer behavioral insights to improve churn prediction performance and support data-driven retention strategies in the banking sector.
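Of the imbalance-handling techniques this abstract lists, class-weight adjustment is simple enough to illustrate directly. The sketch below reproduces the standard "balanced" weighting w_c = n / (k * n_c), the formula scikit-learn uses for class_weight="balanced"; the helper name is ours.

```python
def balanced_class_weights(labels):
    """w_c = n_samples / (n_classes * count_c): rare classes (e.g. the
    1,627 churners among 10,127 customers) get proportionally larger
    weight in the classifier's loss."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}
```

With the paper's class sizes the churn class would be weighted roughly 10127 / (2 * 1627) = 3.11 versus about 0.60 for retained customers, pushing the model to pay attention to the minority class without resampling.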
PIFC-CLD: Poison image traceback via feature clustering and Euclidean norm distance for clean-label attacks in deep neural networks Abomakhleb, Abomakhleb; Jalil, Kamarularifin Abd; Buja, Alya Geogiana; Alhammadi, Abdulraqeb
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2206

Abstract

Clean-label poisoning attacks pose a stealthy and potent threat to deep neural networks (DNNs), particularly when models rely on publicly available or outsourced training data. Among these attacks, the Bullseye Polytope method is highly transferable and can evade state-of-the-art defenses such as deep k-NN. To counter this, we propose Poison Image Traceback via Feature Clustering (PIFC-CLD), a novel forensic approach that leverages Euclidean norm distances to detect and trace clean-label attacks in DNNs. PIFC-CLD exploits the geometric consistency of feature representations to identify poisoned samples responsible for model misclassifications. Unlike traditional majority-vote-based defenses, PIFC-CLD performs clustering in feature space and detects poisoned samples based on their proximity to misclassified targets using Euclidean distance. We evaluate our approach under Bullseye Polytope attack scenarios using the CIFAR-10 dataset and WideResNet architectures. PIFC-CLD achieves 99% precision, 95% recall, and a 96% F1 score at k = 25 and ε = 0.2, demonstrating robust performance against Bullseye Polytope attacks. Furthermore, our algorithm exhibits strong resilience to parameter variations while minimizing false positives and preserving model integrity. This work bridges the gap between digital forensics and adversarial machine learning, offering a lightweight, model-agnostic, and interpretable solution for secure model training in adversarial environments.
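The core traceback test described here, flagging training samples whose feature vectors fall within an ε-ball (Euclidean norm) of a misclassified target, can be sketched as follows. This is a simplified illustration of the distance test only, not the full PIFC-CLD pipeline (it omits the clustering step), and the function name is ours.

```python
import math

def trace_suspects(features, target_feature, epsilon):
    """Return (index, distance) pairs for training samples whose feature
    vector lies within epsilon of the misclassified target's features,
    sorted nearest-first. These are the candidate poison images."""
    suspects = []
    for idx, f in enumerate(features):
        dist = math.sqrt(sum((a - b) ** 2
                             for a, b in zip(f, target_feature)))
        if dist <= epsilon:
            suspects.append((idx, dist))
    return sorted(suspects, key=lambda pair: pair[1])
```

In the paper's setting the features would come from the penultimate layer of the trained WideResNet, and ε = 0.2 is the radius reported to work well.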
Precise cervical cancer cell boundary denoising and segmentation with adaptive wavelet-spectral enhancement Mukku, Lalasa; Laman, Manjunath Ramanna; Hegde, Lavanya; Mahapurush, Prathima; Mahapurush, Shivanandaswamy
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2267

Abstract

Accurate segmentation of cell nuclei in cervical cytology images is crucial for automated cervical cancer screening, yet existing methods struggle with blurred boundaries, noise-induced degradation, and topologically implausible predictions. The current research proposes Cell-Seg Tool, a novel triplet-branch diffusion AI tool that synergistically integrates three innovations to address these limitations. The Wavelet-Enhanced Contour Refinement Branch employs a learnable multi-scale discrete wavelet transform with adaptive coefficient attention to dynamically enhance boundary features across horizontal, vertical, and diagonal orientations. The Adaptive Spectral Noise Suppression module performs dual-domain processing using DCT-based filtering and uncertainty-guided fusion, coupled with bidirectional anchor semantic feedback to couple cross-branch information. The Topology-Aware Hybrid Loss integrates a focal Tversky loss, a persistent homology loss, a directional boundary loss, a skeleton completeness loss, and a diffusion-noise MSE loss for multi-objective optimization. Comprehensive experiments on multiple datasets demonstrate superior performance, achieving 94.45% Dice coefficient and 19.2% reduction in boundary localization error compared to state-of-the-art methods. Unlike prior work that applies these techniques independently, this work demonstrates that their adaptive, synergistic integration within a diffusion-based framework yields substantial improvements in boundary accuracy and topological correctness.
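The wavelet branch described above builds on the discrete wavelet transform. As a minimal, generic illustration (not the paper's learnable multi-scale transform), one level of the 1-D Haar DWT splits a signal into approximation and detail coefficients; applied along rows and then columns of an image, it yields the horizontal, vertical, and diagonal detail sub-bands the abstract refers to.

```python
def haar_dwt_1d(signal):
    """One level of the Haar DWT: pairwise averages (approximation,
    low-pass) and pairwise half-differences (detail, high-pass).
    Detail coefficients spike at edges, which is why wavelet sub-bands
    are useful for sharpening cell-nucleus boundaries."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        approx.append((signal[i] + signal[i + 1]) / 2.0)
        detail.append((signal[i] - signal[i + 1]) / 2.0)
    return approx, detail
```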
Fixed Sherwood duel optimization for time series imputation Utama, Agung Bella Putra; Wibawa, Aji Prasetya; Handayani, Anik Nur; Nafalski, Andrew
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2396

Abstract

Missing values remain a persistent challenge in time-series data, particularly within large-scale monitoring systems where reliable forecasting and evaluation are essential. Incomplete records often arise from irregular reporting, infrastructure limitations, or system failures, leading to biased analyses and inaccurate predictions. Traditional imputation methods, such as mean, median, and mode substitution, provide computational efficiency but oversimplify temporal structures. At the same time, more advanced approaches, including Multiple Imputation by Chained Equations (MICE) and K-Nearest Neighbors (KNN), offer improvements yet remain sensitive to data distribution and model configuration. To address this gap, this study introduces Sherwood Duel Optimization (SDO). This socio-inspired framework reconceptualizes imputation as a deterministic duel-based optimization problem. In its fixed form, SDO generates multiple candidate imputations and selects the most robust replacement value using a composite multi-metric scoring mechanism that integrates forecasting accuracy and explanatory power. The framework was evaluated using multivariate educational time-series data and further validated across heterogeneous SDG-related domains, and compared against classical and advanced baselines across three forecasting models. Experimental results demonstrate that SDO consistently outperforms existing methods, reducing forecasting error (MAPE) by more than 40%, achieving the lowest RMSE, and producing R² values exceeding 0.95. Statistical testing confirms that these improvements are significant across experimental configurations. These findings highlight the potential of SDO as a reliable, interpretable, and computationally efficient optimization-based imputation framework. By strengthening data reliability at the reconstruction stage, SDO enhances the credibility of downstream forecasting and decision-making in institutional and sustainability-oriented monitoring systems.
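The duel-based selection this abstract describes can be sketched generically: each candidate imputation (e.g. mean, median, interpolation) yields a forecast, candidates are scored on held-out values with a composite of forecasting errors, and the lowest score wins. The RMSE + MAPE weighting and the function names below are illustrative assumptions, not the paper's exact composite scoring mechanism.

```python
import math

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    return sum(abs((a - f) / a)
               for a, f in zip(actual, forecast)) / len(actual)

def select_imputation(candidates, actual, forecast_fn):
    """candidates: {name: imputed_series}. Each candidate's series is fed
    to a forecaster; the candidate yielding the lowest composite error on
    the held-out actuals wins the duel."""
    scores = {name: rmse(actual, forecast_fn(series))
                    + mape(actual, forecast_fn(series))
              for name, series in candidates.items()}
    return min(scores, key=scores.get)
```

Because the winner is chosen by a deterministic comparison rather than a random search, repeated runs on the same data select the same replacement value, matching the "fixed" character of the framework.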
Non-destructive classification of sugarcane milling feasibility using deep learning: A comparative study of VGG19 and ResNet50 Indrianti, Nur; Leuveano, Raden Achmad Chairdino; Rustamaji, Heru Cahya; Ferriyan, Andrey; Mulyono, Panut; Wijaya, Bayu Prasetya
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2236

Abstract

Assessing sugarcane quality is crucial for ensuring both economic value and processing efficiency in sugar production. Conventional approaches, such as refractometer-based Brix measurements, are destructive, labor-intensive, and unsuitable for large-scale or rapid field evaluations. This study proposes a non-destructive deep learning framework for classifying sugarcane internodes into two quality categories (< 16 °Bx and ≥16 °Bx) to address existing limitations. Two convolutional neural network architectures, VGG19 and ResNet50, were evaluated utilizing a defined transfer learning and data augmentation methodology. Because of its residual connections, which enable deeper and more stable feature learning, ResNet50 consistently outperformed VGG19, achieving the highest accuracy of 78.85% on the Luar2_Putih dataset. This comparative finding demonstrates that modern residual-based networks provide superior robustness for subtle visual classification tasks in agricultural imaging, while also validating the stability of the proposed two-phase training framework. The study advances AI-driven non-destructive quality assessment by offering a scalable, field-deployable solution that supports sustainable, efficient sugarcane processing in line with the UN Sustainable Development Goals (SDG 2, 9, 12, and 13).
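The data-augmentation step mentioned in this abstract can be illustrated with the simplest geometric transform. The abstract does not list the exact transforms used, so the mirror-doubling sketch below is an assumption about the pipeline, not the authors' augmentation recipe.

```python
def horizontal_flip(img):
    """Mirror an image (nested lists, H x W or H x W x C) left-right."""
    return [row[::-1] for row in img]

def augment(images):
    """Simple doubling augmentation: originals plus their mirrors.
    Flips preserve the visual cues of an internode while varying its
    orientation, which helps a CNN generalize from a small dataset."""
    return images + [horizontal_flip(im) for im in images]
```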
Towards a high-accuracy framework for quranic reciter recognition using deep learning and a large-scale benchmark dataset Al-Omari, Ibrahim; Alshargabi, Asma
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2288

Abstract

Speaker recognition aims to identify who is speaking from their voice and is widely used in security, personalization, and archival search. A related, culturally significant task is recognizing Qur’ān reciters from their recitations. The Quran is the central religious text of Islam and is recited with codified pronunciation and melodic rules (tajwīd and maqām). Distinguishing reciters can support digital archiving, educational feedback, and retrieval of stylistically similar recitations. We present a controlled comparison of deep learning approaches for Qur’ān reciter recognition, contrasting feature-based pipelines with end-to-end waveform models under a unified protocol. Using ṣūrah Al-Tawbah recitations from 12 reciters (18,540 clips; fixed 2 s segments), an X-Vector architecture with Mel-Frequency Cepstral Coefficients (MFCCs) attains perfect test performance (accuracy/precision/recall/F1 =100%). Convolutional Neural Network (CNN) and Bidirectional LSTM (BLSTM) baselines achieve near-optimal results (99.96% accuracy and F1), while an end-to-end X-Vector trained on raw waveforms reaches 98.77% accuracy (F1 = 0.9877). These findings indicate that explicit spectral features remain advantageous for short segments requiring fine acoustic discrimination, although end-to-end learning is competitive and simplifies preprocessing. We release the curated dataset with standardized splits and training scripts to enable reproducible benchmarking. Overall, feature-informed X-Vectors constitute a strong reference for short-segment reciter identification, and our results motivate hybrid/self-supervised front ends, tajwīd-aware analysis, and real-time, on-device deployment.
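The fixed 2 s segmentation used to prepare the 18,540 clips can be sketched as plain slicing of a sampled waveform; the `fixed_segments` helper below is ours, not the authors' preprocessing code.

```python
def fixed_segments(samples, sample_rate, seconds=2.0):
    """Cut a waveform (list of samples) into non-overlapping fixed-length
    clips of `seconds` duration, dropping any trailing remainder. Each
    clip then becomes one training example for the reciter classifier."""
    step = int(sample_rate * seconds)
    return [samples[i:i + step]
            for i in range(0, len(samples) - step + 1, step)]
```

At a typical 16 kHz sampling rate each 2 s clip would hold 32,000 samples, from which MFCCs are computed for the feature-based pipelines or which is fed directly to the end-to-end X-Vector.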