Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN : 2442-6571     EISSN : 2548-3161     DOI : 10.26555
Core Subject : Science
International Journal of Advances in Intelligent Informatics (IJAIN), e-ISSN 2442-6571, is a peer-reviewed open-access journal published three times a year in English. It provides scientists and engineers throughout the world a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers; accepted papers are available online (free access), and there is no publication fee for authors.
Articles : 16 Documents
Search results for issue "Vol 12, No 1 (2026): February 2026" : 16 Documents
Predict customer churn in the banking sector: a machine learning approach with imbalanced data handling techniques Lee, Jong-Hwa; Nguyen, Van-Ho; Le, Hoanh-Su
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan


Abstract

Customer value analysis is a critical component in formulating effective marketing and customer relationship management (CRM) strategies, especially in sectors where client movement and strong competition are prevalent. A key element of this process lies in enhancing customer retention rates, as retaining existing clients is typically more cost-effective than acquiring new ones and directly contributes to improving overall profitability. In today’s banking environment, where customers can choose from a broad range of financial services, customer churn has become a critical challenge. Predicting and understanding attrition enables financial institutions to implement proactive and targeted interventions to protect market share and strengthen customer loyalty. This study analyzes a real-world dataset comprising 10,127 customer records from a commercial bank, of which only 1,627 entries correspond to churned customers, presenting a notable class imbalance problem. To address this, several data balancing techniques were applied, including class-weight adjustment, SMOTE, SMOTE-Tomek Links, and SMOTE-ENN. Multiple machine learning models (Support Vector Machine, Random Forest, Decision Tree, Logistic Regression, and AdaBoost) were evaluated to identify the most effective approach for churn prediction. The Random Forest model achieved an 86% F1-score after applying SMOTE-Tomek Links, demonstrating strong predictive capability. The key contribution of this study lies in integrating advanced resampling techniques with ensemble learning and customer behavioral insights to improve churn prediction performance and support data-driven retention strategies in the banking sector.
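The imbalance-handling setup described in the abstract can be sketched as follows. This is a minimal illustration using the class-weight variant (one of the balancing techniques the study evaluates) on synthetic data mirroring the stated class ratio; it is not the authors' pipeline, whose best results used SMOTE-Tomek Links resampling (available as `SMOTETomek` in the imbalanced-learn library) before training the Random Forest.

```python
# Sketch: churn prediction on imbalanced data via class-weight adjustment.
# Synthetic stand-in for the bank data: ~16% positive (churn) class,
# mirroring 1,627 churners out of 10,127 records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10127, n_features=20,
                           weights=[0.84, 0.16], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# class_weight="balanced" reweights the minority (churn) class inversely
# to its frequency, so the forest does not simply predict "no churn".
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=42)
clf.fit(X_tr, y_tr)
print(f"F1 on held-out data: {f1_score(y_te, clf.predict(X_te)):.3f}")
```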
PIFC-CLD: Poison image traceback via feature clustering and Euclidean norm distance for clean-label attacks in deep neural networks Abomakhleb, Abomakhleb; Jalil, Kamarularifin Abd; Buja, Alya Geogiana; Alhammadi, Abdulraqeb
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2206

Abstract

Clean-label poisoning attacks pose a stealthy and potent threat to deep neural networks (DNNs), particularly when models rely on publicly available or outsourced training data. Among these attacks, the Bullseye Polytope method is highly transferable and can evade state-of-the-art defenses such as deep k-NN. To counter this, we propose Poison Image Traceback via Feature Clustering (PIFC-CLD), a novel forensic approach that leverages Euclidean norm distances to detect and trace clean-label attacks in DNNs. PIFC exploits the geometric consistency of feature representations to identify poisoned samples responsible for model misclassifications. Unlike traditional majority-vote-based defenses, PIFC-CLD performs clustering in feature space and detects poisoned samples based on their proximity to misclassified targets using Euclidean distance. We evaluate our approach under Bullseye Polytope attack scenarios using the CIFAR-10 dataset and WideResNet architectures. PIFC-CLD achieves 99% precision, 95% recall, and a 96% F1 score at k = 25 and ε = 0.2, demonstrating robust performance against Bullseye Polytope attacks. Furthermore, our algorithm exhibits strong resilience to parameter variations while minimizing false positives and preserving model integrity. This work bridges the gap between digital forensics and adversarial machine learning, offering a lightweight, model-agnostic, and interpretable solution for secure model training in adversarial environments.
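The core tracing step, detecting samples whose features sit close to a misclassified target in Euclidean distance, can be sketched as below. This is a deliberate simplification, not the authors' full PIFC-CLD pipeline (which also clusters the feature space); the feature vectors here are random stand-ins, whereas in practice they would come from a trained DNN's penultimate layer.

```python
# Sketch (simplified): flag training samples whose feature vectors fall
# within a Euclidean radius epsilon of a misclassified target's features.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))      # training-set feature vectors
target = features[:5].mean(axis=0)         # misclassified target's features
# Plant a few "poisons": samples crafted to sit near the target in feature space
features[:5] = target + rng.normal(scale=0.01, size=(5, 64))

epsilon = 0.2                              # threshold (the paper reports eps = 0.2)
dists = np.linalg.norm(features - target, axis=1)
suspects = np.where(dists < epsilon)[0]    # indices traced as likely poisons
print(sorted(suspects.tolist()))
```

Because clean samples in a high-dimensional feature space lie far from any single point, a small radius isolates the crafted samples with few false positives, which is the geometric consistency the abstract refers to.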
Precise cervical cancer cell boundary denoising and segmentation with adaptive wavelet-spectral enhancement Mukku, Lalasa; Laman, Manjunath Ramanna; Hegde, Lavanya; Mahapurush, Prathima; Mahapurush, Shivanandaswamy
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2267

Abstract

Accurate segmentation of cell nuclei in cervical cytology images is crucial for automated cervical cancer screening, yet existing methods struggle with blurred boundaries, noise-induced degradation, and topologically implausible predictions. The current research proposes Cell-Seg Tool, a novel triplet-branch diffusion AI tool that synergistically integrates three innovations to address these limitations. The Wavelet-Enhanced Contour Refinement Branch employs a learnable multi-scale discrete wavelet transform with adaptive coefficient attention to dynamically enhance boundary features across horizontal, vertical, and diagonal orientations. The Adaptive Spectral Noise Suppression module performs dual-domain processing using DCT-based filtering and uncertainty-guided fusion, coupled with bidirectional anchor semantic feedback to couple cross-branch information. The Topology-Aware Hybrid Loss integrates a focal Tversky loss, a persistent homology loss, a directional boundary loss, a skeleton completeness loss, and a diffusion-noise MSE loss for multi-objective optimization. Comprehensive experiments on multiple datasets demonstrate superior performance, achieving 94.45% Dice coefficient and 19.2% reduction in boundary localization error compared to state-of-the-art methods. Unlike prior work that applies these techniques independently, this work demonstrates that their adaptive, synergistic integration within a diffusion-based framework yields substantial improvements in boundary accuracy and topological correctness.
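The horizontal/vertical/diagonal detail bands that the wavelet branch re-weights come from a standard 2-D discrete wavelet transform. A minimal single-level Haar version in NumPy is sketched below for illustration only; the paper's transform is learnable, multi-scale, and embedded in the network.

```python
# Sketch: single-level 2-D Haar wavelet transform, producing the
# approximation band plus horizontal / vertical / diagonal detail bands.
import numpy as np

def haar_dwt2(img):
    """One Haar DWT level on an even-sized 2-D array."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2      # diagonal detail
    return LL, (LH, HL, HH)

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "cell image"
LL, (LH, HL, HH) = haar_dwt2(img)
print(LL.shape)  # each band is half-resolution: (4, 4)
```

An adaptive coefficient attention, as described in the abstract, would then learn per-band weights over LH, HL, and HH before reconstruction.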
Fixed sherwood duel optimization for time series imputation Utama, Agung Bella Putra; Wibawa, Aji Prasetya; Handayani, Anik Nur; Nafalski, Andrew
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2396

Abstract

Missing values remain a persistent challenge in time-series data, particularly within large-scale monitoring systems where reliable forecasting and evaluation are essential. Incomplete records often arise from irregular reporting, infrastructure limitations, or system failures, leading to biased analyses and inaccurate predictions. Traditional imputation methods, such as mean, median, and mode substitution, provide computational efficiency but oversimplify temporal structures. At the same time, more advanced approaches, including Multiple Imputation by Chained Equations (MICE) and K-Nearest Neighbors (KNN), offer improvements yet remain sensitive to data distribution and model configuration. To address this gap, this study introduces Sherwood Duel Optimization (SDO), a socio-inspired framework that reconceptualizes imputation as a deterministic duel-based optimization problem. In its fixed form, SDO generates multiple candidate imputations and selects the most robust replacement value using a composite multi-metric scoring mechanism that integrates forecasting accuracy and explanatory power. The framework was evaluated using multivariate educational time-series data, further validated across heterogeneous SDG-related domains, and compared against classical and advanced baselines across three forecasting models. Experimental results demonstrate that SDO consistently outperforms existing methods, reducing forecasting error (MAPE) by more than 40%, achieving the lowest RMSE, and producing R² values exceeding 0.95. Statistical testing confirms that these improvements are significant across experimental configurations. These findings highlight the potential of SDO as a reliable, interpretable, and computationally efficient optimization-based imputation framework. By strengthening data reliability at the reconstruction stage, SDO enhances the credibility of downstream forecasting and decision-making in institutional and sustainability-oriented monitoring systems.
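The duel idea of generating candidate imputations and keeping the winner under a scoring rule can be sketched as below. This is an illustrative simplification, not the authors' exact SDO algorithm: the scoring here is a single naive one-step forecast error, where the paper uses a composite multi-metric score, and the toy series stands in for real monitoring data.

```python
# Sketch: candidate-vs-candidate "duel" for a single missing point.
import numpy as np

series = np.array([10., 12., 11., np.nan, 18., 15., 13.])
miss = int(np.where(np.isnan(series))[0][0])

# Candidate replacement values (the "duelists").
candidates = {
    "mean": np.nanmean(series),
    "median": np.nanmedian(series),
    "interp": (series[miss - 1] + series[miss + 1]) / 2,  # linear interpolation
}

def score(value):
    """Lower is better: absolute error of a naive one-step forecast that
    uses the candidate value to predict the next observed point."""
    return abs(value - series[miss + 1])

best = min(candidates, key=lambda k: score(candidates[k]))
series[miss] = candidates[best]
print(best, series[miss])
```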
Non-destructive classification of sugarcane milling feasibility using deep learning: A comparative study of VGG19 and ResNet50 Indrianti, Nur; Leuveano, Raden Achmad Chairdino; Rustamaji, Heru Cahya; Ferriyan, Andrey; Mulyono, Panut; Wijaya, Bayu Prasetya
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan


Abstract

Assessing sugarcane quality is crucial for ensuring both economic value and processing efficiency in sugar production. Conventional approaches, such as refractometer-based Brix measurements, are destructive, labor-intensive, and unsuitable for large-scale or rapid field evaluations. This highlights the need for non-destructive, automated solutions that can deliver accurate and scalable assessments. This study proposes a deep learning framework for classifying sugarcane internodes into two quality categories based on Brix values: unsuitable for milling (<16 °Brix) and suitable for milling (≥16 °Brix) using image-based analysis. The dataset consists of two configurations: Luar1 (single internode) and Luar2 (a split internode with two outer sides placed side by side), each photographed against white and black backgrounds. Preprocessing, data augmentation, and transfer learning were applied using VGG19 and ResNet50 under a two-phase strategy. Phase 1 involved freezing the backbone layers (50 epochs), and Phase 2 involved fine-tuning (100 epochs). The results demonstrate that fine-tuning significantly enhanced model performance. VGG19 achieved accuracies between 72.12% and 75.06%, while ResNet50 consistently outperformed it, reaching 78.85% with the Luar2_Putih dataset. Confusion matrix analysis further confirmed ResNet50’s superior ability to minimize misclassification, particularly for high-quality canes that are crucial for milling feasibility. These findings advance non-destructive quality assessment in sugarcane and support the United Nations Sustainable Development Goals (SDG 2, SDG 9, and SDG 12) by strengthening food security through improved crop utilization, fostering innovation in agricultural technologies, and promoting sustainable production practices in the sugar industry.
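The two-phase training schedule (Phase 1: frozen backbone, head only; Phase 2: full fine-tuning) can be sketched as below. A tiny stand-in backbone is used to keep the example self-contained; the study fine-tunes pretrained VGG19 and ResNet50 backbones, and the learning rates here are illustrative assumptions.

```python
# Sketch: two-phase transfer learning (freeze backbone, then fine-tune).
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 2)  # two classes: suitable / unsuitable for milling
model = nn.Sequential(backbone, head)

# Phase 1: freeze the backbone, train only the classification head (50 epochs).
for p in backbone.parameters():
    p.requires_grad = False
opt_phase1 = torch.optim.Adam(head.parameters(), lr=1e-3)

# Phase 2: unfreeze everything and fine-tune at a lower rate (100 epochs).
for p in backbone.parameters():
    p.requires_grad = True
opt_phase2 = torch.optim.Adam(model.parameters(), lr=1e-5)

x = torch.randn(4, 3, 64, 64)  # toy batch standing in for internode images
print(model(x).shape)          # logits: 4 images x 2 classes
```

Freezing first lets the randomly initialized head converge without disturbing pretrained features; the subsequent low-rate fine-tuning adapts those features to the sugarcane imagery, which is what the abstract reports as the main source of the accuracy gain.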
Towards a high-accuracy framework for quranic reciter recognition using deep learning and a large-scale benchmark dataset Al-Omari, Ibrahim; Alshargabi, Asma
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2288

Abstract

Speaker recognition aims to identify who is speaking from their voice and is widely used in security, personalization, and archival search. A related, culturally significant task is recognizing Qur’ān reciters from their recitations. The Quran is the central religious text of Islam and is recited with codified pronunciation and melodic rules (tajwīd and maqām). Distinguishing reciters can support digital archiving, educational feedback, and retrieval of stylistically similar recitations. We present a controlled comparison of deep learning approaches for Qur’ān reciter recognition, contrasting feature-based pipelines with end-to-end waveform models under a unified protocol. Using ṣūrah Al-Tawbah recitations from 12 reciters (18,540 clips; fixed 2 s segments), an X-Vector architecture with Mel-Frequency Cepstral Coefficients (MFCCs) attains perfect test performance (accuracy/precision/recall/F1 =100%). Convolutional Neural Network (CNN) and Bidirectional LSTM (BLSTM) baselines achieve near-optimal results (99.96% accuracy and F1), while an end-to-end X-Vector trained on raw waveforms reaches 98.77% accuracy (F1 = 0.9877). These findings indicate that explicit spectral features remain advantageous for short segments requiring fine acoustic discrimination, although end-to-end learning is competitive and simplifies preprocessing. We release the curated dataset with standardized splits and training scripts to enable reproducible benchmarking. Overall, feature-informed X-Vectors constitute a strong reference for short-segment reciter identification, and our results motivate hybrid/self-supervised front ends, tajwīd-aware analysis, and real-time, on-device deployment.
