Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN: 2442-6571     e-ISSN: 2548-3161     DOI prefix: 10.26555
Core Subject: Science
International Journal of Advances in Intelligent Informatics (IJAIN, ISSN: 2442-6571) is a peer-reviewed, open-access journal published three times a year in English. It provides scientists and engineers throughout the world with a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers; accepted papers are available online with free access, and there is no publication fee for authors.
Articles 330 Documents
Modified particle swarm optimization (MPSO) optimized CNN’s hyperparameters for classification Murinto, Murinto; Winiarti, Sri
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1303

Abstract

This paper proposes a convolutional neural network architectural design approach using the modified particle swarm optimization (MPSO) algorithm. Adjusting hyperparameters and searching for an optimal network architecture for convolutional neural networks (CNN) is a challenging task: network performance and learning efficiency on a given problem depend on the hyperparameter values, whose exploration entails a large and complex search space. Heuristic-based search is well suited to this type of problem; the main contribution of this research is applying the MPSO algorithm to find optimal CNN parameters, including the number of convolutional layers, the filters used in the convolution process, the number of convolution filters, and the batch size. The parameters obtained using MPSO are kept the same in each convolutional layer, and the objective function evaluated by MPSO is the classification rate. The optimized architecture is applied to a Batik motif database. The research found that the proposed model produced the best results, with a classification rate above 94%, comparing favorably with other state-of-the-art approaches. This research demonstrates the performance of the MPSO algorithm in optimizing CNN architectures, highlighting its potential for improving image recognition tasks.
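As a rough illustration of the particle-swarm search described above, the pure-Python sketch below runs a minimal PSO over a toy discrete hyperparameter grid. The search space, the `objective` function (a stand-in for the CNN classification rate), and all coefficient values are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Illustrative search space; the paper's actual ranges are not specified here
SPACE = {"layers": [1, 2, 3, 4], "filters": [16, 32, 64, 128], "batch": [16, 32, 64]}

def decode(pos):
    # Map a continuous position in [0, 1)^3 onto the discrete grid
    return {k: SPACE[k][int(p * len(SPACE[k])) % len(SPACE[k])]
            for k, p in zip(SPACE, pos)}

def objective(cfg):
    # Stand-in for the classification rate; a real run would train a CNN
    # with cfg and return its validation accuracy
    return (-(cfg["layers"] - 3) ** 2
            - (cfg["filters"] - 64) ** 2 / 1000
            - (cfg["batch"] - 32) ** 2 / 500)

def pso(n=10, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(3)] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(decode(p)) for p in pos]
    gi = max(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), 0.999)
            f = objective(decode(pos[i]))
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest)

best = pso()
```

A "modified" PSO such as the paper's would alter the velocity update (e.g. inertia scheduling), but the decode-evaluate-update loop is the same.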
Finding a suitable chest x-ray image size for the process of Machine learning to build a model for predicting Pneumonia Yothapakdee, Kriengsak; Pugtao, Yosawaj; Charoenkhun, Sarawoot; Boonnuk, Tanunchai; Tamee, Kreangsak
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1897

Abstract

This study evaluated the most suitable chest X-ray image size for machine learning models that predict pneumonia infection, focusing on algorithm performance and training/testing time. The neural network algorithm achieved an accuracy rate of 87.00% across different image sizes. While larger images generally yield better results, performance declines beyond a certain size. Lowering the image resolution to 32x32 pixels significantly reduces performance to 83.00%, likely due to the loss of diagnostic features. Furthermore, this study emphasizes the relationship between image size and processing time, empirically revealing that both increasing and decreasing image size beyond the optimal point increases training and testing time; notably, 299x299-pixel images completed the process in seconds. Our results indicate a trade-off: larger images slightly improved accuracy but slowed processing, while smaller images reduced precision and effectiveness. These findings assist in optimizing chest X-ray image sizes for pneumonia prediction models by weighing diagnostic accuracy against computational resources.
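The loss of fine diagnostic detail under aggressive downsampling can be illustrated with a simple nearest-neighbour resize. The image sizes match those discussed above, but the image and its one-pixel-wide "feature" are synthetic:

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list-of-lists grayscale image."""
    h, w = len(img), len(img[0])
    return [[img[int(r * h / new_h)][int(c * w / new_w)] for c in range(new_w)]
            for r in range(new_h)]

# Synthetic 299x299 "X-ray" whose only feature is a 1-pixel-wide bright line
img = [[255 if c == 150 else 0 for c in range(299)] for r in range(299)]
small = resize_nearest(img, 32, 32)
# Whether any trace of the thin feature survives the downsampling
survived = any(255 in row for row in small)
```

At 32x32 the sampled columns skip the thin line entirely, a toy analogue of the accuracy drop the study attributes to lost diagnostic features.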
A deep learning ensemble framework for robust classification of lung ultrasound patterns: covid-19, pneumonia, and normal Morsy, Shereen; Abd-Elsalam, Neveen; Khandil, Ahmed; Elbialy, Ahmed; Youssef, Abou-Bakr
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1966

Abstract

To advance the automated interpretation of lung ultrasound (LUS) data, multiple deep learning (DL) models have been introduced to identify LUS patterns for differentiating COVID-19, pneumonia, and normal cases. While these models have generally yielded promising outcomes, they have encountered challenges in accurately classifying each pattern across diverse cases. Therefore, this study introduces an ensemble framework that leverages multiple classification models, optimizing their contributions to the final prediction through a majority voting mechanism. After training seven different classification models, the three with the highest accuracies were selected. The ensemble incorporates these top-performing models (EfficientNetV2-B0, EfficientNetV2-B2, and EfficientNetV2-B3) and uses this framework to classify patterns in LUS images. Compared to individual model performance, the ensemble approach significantly enhances classification accuracy, achieving an accuracy of 99.25% and an F1-score of 99%. In contrast, the standalone models attained accuracies of 97.8%, 97.6%, and 98.1%, with F1-scores of approximately 98%. This research highlights the potential of ensemble learning for improving the accuracy and robustness of automated LUS analysis, offering a practical and scalable solution for real-world medical diagnostics. By combining the strengths of multiple models, the proposed framework paves the way for more reliable and efficient tools to assist clinicians in diagnosing lung diseases.
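The majority-voting mechanism at the heart of such an ensemble can be sketched in a few lines; the three model outputs below are hypothetical, not results from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.
    predictions: one equal-length list of labels per model."""
    combined = []
    for per_sample in zip(*predictions):
        label, _ = Counter(per_sample).most_common(1)[0]
        combined.append(label)
    return combined

# Hypothetical outputs of three models over five LUS frames
m1 = ["covid", "normal", "pneumonia", "covid", "normal"]
m2 = ["covid", "normal", "normal", "covid", "normal"]
m3 = ["pneumonia", "normal", "pneumonia", "covid", "pneumonia"]
ensemble = majority_vote([m1, m2, m3])
```

With an odd number of models, a two-out-of-three agreement always overrides a single dissenting prediction, which is how the ensemble smooths over individual models' errors.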
Analyzing risk factors and handling imbalanced data for predicting stroke risk using machine learning Adiwijaya, Adiwijaya; Ramadhan, Nur Ghaniaviyanto
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1678

Abstract

Stroke is a serious medical condition resulting from disturbances in blood flow to the brain and requires an immediate response. Principal risk factors increasing the likelihood of stroke include pre-existing conditions such as Diabetes Mellitus (DM), hypertension, and high cholesterol levels. Effective preventive measures are crucial to minimizing stroke risk, and predictive methods based on analysis of a clinical examination dataset covering the last three years (2019-2021), known as the general checkup (GCU) dataset, present an innovative approach. This study aims to predict an individual's stroke risk for the following year. The study also addresses the preprocessing stage of the GCU dataset, including handling missing values by substituting the statistical mean, label encoding, feature correlation analysis using entropy values, and addressing data imbalance with the Adaptive Synthetic (ADASYN) technique. To evaluate predictive performance, the research compares various machine learning models. The experiment shows that Random Forest is the best model, with 98.7% accuracy and a 63.9% F1-score. This research highlights the importance of preemptive measures against stroke by utilizing predictive techniques on clinical data, with the Random Forest model proving most effective in forecasting stroke probability.
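Two of the preprocessing steps mentioned, mean imputation of missing values and label encoding, can be sketched in plain Python; the column values below are hypothetical, not from the GCU dataset:

```python
def impute_mean(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def label_encode(column):
    """Map each distinct category to an integer, in first-seen order."""
    mapping = {}
    encoded = []
    for v in column:
        if v not in mapping:
            mapping[v] = len(mapping)
        encoded.append(mapping[v])
    return encoded, mapping

# Hypothetical GCU-style columns
chol = impute_mean([200.0, None, 180.0, 220.0])
sex, sex_map = label_encode(["F", "M", "M", "F"])
```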
Soft voting ensemble model to improve Parkinson’s disease prediction with SMOTE Unjung, Jumanto; Rofik, Rofik; Sugiharti, Endang; Alamsyah, Alamsyah; Arifudin, Riza; Prasetiyo, Budi; Muslim, Much Aziz
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1627

Abstract

Parkinson's disease is one of the major neurodegenerative diseases affecting the central nervous system, often leading to motor and cognitive impairments. Precise diagnosis is currently unreliable, and no specific tests, such as electroencephalography or blood tests, exist to diagnose the disease. Several studies have focused on voice-based classification of Parkinson's disease, attempting to enhance the accuracy of classification models. However, a major issue in predictive analysis is imbalanced data distribution and the low performance of classification algorithms. This research aims to improve the accuracy of speech-based Parkinson's disease prediction by addressing class imbalance in the data and building an appropriate model. The proposed approach performs class balancing using SMOTE and builds a voting ensemble model. The research process is structured into multiple phases: data preprocessing, sampling, model development using a voting ensemble approach, and performance evaluation. The model was tested using voice recording data from 31 people, taken from OpenML. Evaluation using stratified cross-validation showed good model performance: an accuracy of 97.44%, precision of 97.95%, recall of 97.44%, and F1-score of 97.56%. This study demonstrates that the soft-voting ensemble-SMOTE method can enhance the model's predictive accuracy.
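The SMOTE balancing step synthesizes new minority-class samples by interpolating between existing ones. A minimal, pure-Python sketch of the idea (the study most likely used a library implementation; the feature vectors below are hypothetical):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: each synthetic point interpolates between a
    random minority sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(base, nb)))
    return synthetic

# Hypothetical minority-class feature vectors
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = smote(minority, n_new=4)
```

Because every synthetic point lies on a segment between two real minority samples, the new points stay inside the minority region rather than duplicating existing samples.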
Gender classification performance optimization based on facial images using LBG-VQ and MB-LBP Hakim, Faruq Abdul; Dharmawan, Tio; Hidayat, Muhamad Arief
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1827

Abstract

In the computer vision and machine learning field, especially for gender classification based on facial images, feature extraction is an inseparable part of the pipeline. Various features can be extracted from images, including texture features. Several prior studies show that the Linde-Buzo-Gray vector quantization (LBG-VQ) and multi-block local binary pattern (MB-LBP) methods can extract texture features from images. LBG-VQ produces less optimal performance in gender classification on the FEI facial image dataset, while MB-LBP performs better when applied to the FERET facial image dataset. Therefore, this study investigates gender classification performance when the LBG-VQ and MB-LBP methods are implemented independently or in combination on the FEI facial image dataset. Three preprocessing stages precede feature extraction: noise removal, illumination adjustment, and image conversion from RGB to grayscale. The extracted features are then used as training data for several classification methods, namely Naïve Bayes, SVM, KNN, Random Forest, and Logistic Regression, and the K-Fold cross-validation method is used to evaluate the trained models. This study found that MB-LBP tends to improve performance compared to LBG-VQ. Furthermore, the most optimal classification model, with a performance of 91.928%, was formed by applying Logistic Regression with MB-LBP on LBG-VQ quantized images. In conclusion, this study successfully formed an optimized gender classification model based on the FEI facial image dataset.
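The MB-LBP operator compares the mean intensity of the eight s x s blocks surrounding a centre block, rather than comparing single pixels as plain LBP does. A minimal sketch, assuming a simple clockwise neighbour ordering (orderings vary between implementations):

```python
def block_mean(img, r0, c0, s):
    """Mean intensity of the s x s block whose top-left corner is (r0, c0)."""
    return sum(img[r][c] for r in range(r0, r0 + s)
               for c in range(c0, c0 + s)) / (s * s)

def mb_lbp_code(img, r0, c0, s):
    """MB-LBP code for the 3x3 grid of s x s blocks with top-left (r0, c0):
    compare each of the 8 surrounding block means with the centre block mean."""
    centre = block_mean(img, r0 + s, c0 + s, s)
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if block_mean(img, r0 + dr * s, c0 + dc * s, s) >= centre else 0
            for dr, dc in offsets]
    return sum(b << i for i, b in enumerate(bits))

# 6x6 horizontal intensity gradient: block means rise left to right
img = [[c for c in range(6)] for _ in range(6)]
code = mb_lbp_code(img, 0, 0, 2)
```

Histograms of such codes over the image form the texture feature vector fed to the classifiers.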
Cocoa bean quality identification using a computer vision-based color and texture feature extraction Basri, Basri; Indrabayu, Indrabayu; Achmad, Andani; Areni, Intan Sari
International Journal of Advances in Intelligent Informatics Vol 11, No 1 (2025): February 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i1.1609

Abstract

A pressing issue in the downstream processing of cocoa beans is the need for a strict quality control system; visual inspection of raw cocoa beans reveals the need for advanced technological solutions, especially in Industry 4.0. This paper introduces an innovative image-processing approach that extracts color and texture features to identify cocoa bean quality. Image acquisition involved capturing video with a data acquisition box connected to a conveyor, producing image samples of good-quality and poor-quality non-cut cocoa beans. Our methodology includes multifaceted pre-processing, sharpening techniques, and a comparative analysis of feature extraction methods using Hue-Saturation-Value (HSV) and the Gray-Level Co-occurrence Matrix (GLCM) with correlated features; the 15 features with the highest correlation were used. Machine learning models were built using a Support Vector Machine (SVM) with varied parameter values and an RBF kernel. Several metrics were measured to compare each approach, and the results show that pre-processing without sharpening achieves better accuracy, notably with the HSV and GLCM combination reaching 0.99 accuracy. Adequate technical lighting during data acquisition is crucial for accuracy. This study sheds light on the efficacy of image processing for cocoa bean quality identification, addressing a critical gap in industrial-scale implementation of technological solutions and advancing quality control measures in the cocoa industry.
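A GLCM tallies how often pairs of gray levels co-occur at a fixed pixel offset; texture descriptors such as contrast are then derived from it. A minimal sketch on a tiny 2-level image (illustrative only, not the paper's feature set):

```python
def glcm(img, levels, dr=0, dc=1):
    """Gray-level co-occurrence matrix for one pixel offset
    (default: each pixel paired with its right-hand neighbour)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                m[img[r][c]][img[r2][c2]] += 1
    return m

img = [[0, 0, 1],
       [1, 1, 0],
       [0, 1, 1]]
m = glcm(img, levels=2)
# Texture descriptors follow from the (normalized) matrix, e.g. contrast:
total = sum(sum(row) for row in m)
contrast = sum(m[i][j] * (i - j) ** 2 for i in range(2) for j in range(2)) / total
```

Real pipelines quantize to more gray levels and average several offsets/angles before computing features such as contrast, energy, and correlation.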
Enhancing drug-target affinity prediction through pre-trained language model and gated multi-head attention Khoerunnisa, Ghina; Kurniawan, Isman
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1910

Abstract

Drug development requires accurate drug-target interaction (DTI) information to evaluate a drug's potential. However, existing methods for estimating DTI are slow and expensive. Deep learning offers an efficient and effective alternative by leveraging sequence data for prediction. Nevertheless, the DTI binary classification approach suffers from a large number of non-interacting pairs, resulting in data imbalance that negatively impacts performance. To address this issue, DTI is modeled as a regression problem known as drug-target affinity (DTA), which predicts the strength of interactions. While various deep learning methods show competitive results in DTA prediction, they face a challenge in capturing specific drug-target patterns with limited data. To overcome this problem, this study leverages pre-trained language models for enhanced representation. We also utilize gated multi-head attention (GMHA), which modifies multi-head attention by including dynamic scaling and a gating process to better capture mutual interactions. The results show that our proposed method exceeds the benchmark and baseline in all evaluation metrics, with concordance indices (CI) of 0.893 and 0.872, and modified r-squared (rm2) values of 0.673 and 0.723 on the Davis and KIBA datasets, respectively. Our findings further suggest that pre-trained language models for drug and target receptor representation improve DTA prediction performance. The GMHA method also generally outperforms simple concatenation, with more obvious advantages on more complex datasets such as KIBA. Our approach provides a competitive enhancement in DTA prediction, suggesting a promising direction for further enhancing drug discovery and development processes.
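The gating idea, in contrast to simple concatenation, can be illustrated in a highly simplified per-dimension form. This sketch is not the paper's GMHA architecture (which gates multi-head attention outputs with dynamic scaling); the weights here are arbitrary placeholders:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(drug_vec, target_vec, w):
    """Per-dimension gate: g_i = sigmoid(w_i * (a_i + b_i)),
    output_i = g_i * a_i + (1 - g_i) * b_i.
    The gate learns how much of each representation to keep."""
    out = []
    for a, b, wi in zip(drug_vec, target_vec, w):
        g = sigmoid(wi * (a + b))
        out.append(g * a + (1 - g) * b)
    return out

# With zero weights the gate is 0.5 everywhere, i.e. a plain average
out = gated_fuse([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
```

The point of the gate is that the mixing ratio is input-dependent and learned, whereas concatenation passes both representations through unweighted.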
Privacy-Preserving U-Net Variants with pseudo-labeling for radiolucent lesion segmentation in dental CBCT Ismail, Amelia Ritahani; Azlan, Faris Farhan; Noormaizan, Khairul Akmal; Afiqa, Nurul; Nisa, Syed Qamrun; Ghazali, Ahmad Badaruddin; Pranolo, Andri; Saifullah, Shoffan
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1529

Abstract

Accurate segmentation of radiolucent lesions in dental Cone-Beam Computed Tomography (CBCT) is vital for enhancing diagnostic reliability and reducing the burden on clinicians. This study proposes a privacy-preserving segmentation framework leveraging multiple U-Net variants—U-Net, DoubleU-Net, U2-Net, and Spatial Attention U-Net (SA-UNet)—to address challenges posed by limited labeled data and patient confidentiality concerns. To safeguard sensitive information, Differential Privacy Stochastic Gradient Descent (DP-SGD) is integrated using TensorFlow-Privacy, achieving a privacy budget of ε ≈ 1.5 with minimal performance degradation. Among the evaluated architectures, U2-Net demonstrates superior segmentation performance with a Dice coefficient of 0.833 and an Intersection over Union (IoU) of 0.881, showing less than 2% reduction under privacy constraints. To mitigate data annotation scarcity, a pseudo-labeling approach is implemented within an MLOps pipeline, enabling semi-supervised learning from unlabeled CBCT images. Over three iterative refinements, the pseudo-labeling strategy reduces validation loss by 14.4% and improves Dice score by 2.6%, demonstrating its effectiveness. Additionally, comparative evaluations reveal that SA-UNet offers competitive accuracy with faster inference time (22 ms per slice), making it suitable for low-resource deployments. The proposed approach presents a scalable and privacy-compliant framework for radiolucent lesion segmentation, supporting clinical decision-making in real-world dental imaging scenarios.
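The pseudo-labeling strategy described above amounts to keeping high-confidence model predictions on unlabeled data and treating them as labels for the next training round. A minimal sketch with a hypothetical toy model standing in for the trained segmenter:

```python
def pseudo_label(model, unlabeled, threshold=0.9):
    """Keep only samples the model predicts with high confidence,
    adopting the predicted class as the pseudo-label."""
    selected = []
    for x in unlabeled:
        probs = model(x)
        label = max(range(len(probs)), key=probs.__getitem__)
        if probs[label] >= threshold:
            selected.append((x, label))
    return selected

def toy_model(x):
    # Hypothetical stand-in for a trained classifier's softmax output
    p = min(max(x, 0.0), 1.0)
    return [1 - p, p]

batch = [0.95, 0.5, 0.02, 0.99]
pseudo = pseudo_label(toy_model, batch)
```

Iterating this loop (retrain, re-label, re-select) is what drives the validation-loss and Dice improvements reported over the three refinement rounds.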
An enhanced pivot-based neural machine translation for low-resource languages Sulistyo, Danang Arbian; Wibawa, Aji Prasetya; Prasetya, Didik Dwi; Ahda, Fadhli Almu'ini
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.2115

Abstract

This study examines the efficacy of employing Indonesian as an intermediary language to improve the quality of translations from Javanese to Madurese through a pivot-based approach utilizing neural machine translation (NMT). The principal objective of this research is to enhance translation precision and consistency between these low-resource languages, thereby advancing machine translation models for underrepresented languages. The data collection approach entailed extracting parallel texts from internet sources, followed by pre-processing through tokenization, normalization, and stop-word elimination. The resulting parallel corpora were then utilized to train and assess the NMT models. An intermediary phase through Indonesian is implemented in the translation process to enhance the accuracy and consistency of translations between Javanese and Madurese. The pivot-based strategy consistently surpassed direct translation in BLEU scores for all n-grams (BLEU-1 to BLEU-4); the improved scores signify increased precision in vocabulary selection, preservation of context, and overall comprehensibility. This study enhances the literature in machine translation and computational linguistics, especially for low-resource languages, by illustrating the practical effectiveness of a pivot-based method, whose dependability and efficacy in producing faithful translations were demonstrated through extensive experiments. The pivot-based technique enhances translation quality, although it has limitations, including the risk of error propagation and bias originating from the pivot language. Further research is needed to examine the integration of named entity recognition (NER) to improve accuracy and to optimize the intermediate translation step.
This project advances the domains of machine translation and the preservation of low-resource languages, with practical implications for multilingual communities, language education resources, and cultural conservation.
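A pivot-based pipeline composes two translation stages through the intermediary language. The sketch below uses word-level dictionary lookups with hypothetical toy lexicons in place of the trained NMT models, purely to show the composition (and why errors in the first stage propagate into the second):

```python
def pivot_translate(sentence, src_to_pivot, pivot_to_tgt):
    """Compose two word-level translation stages through a pivot language;
    unknown words pass through unchanged."""
    pivot = [src_to_pivot.get(w, w) for w in sentence.split()]
    return " ".join(pivot_to_tgt.get(w, w) for w in pivot)

# Hypothetical toy lexicons: Javanese -> Indonesian -> Madurese
jv_to_id = {"aku": "saya", "mangan": "makan", "sega": "nasi"}
id_to_mad = {"saya": "sengko'", "makan": "ngakan", "nasi": "nase'"}
out = pivot_translate("aku mangan sega", jv_to_id, id_to_mad)
```

In the actual system each stage is a full Javanese-Indonesian or Indonesian-Madurese NMT model, but the chaining (and the error-propagation risk the study notes) is the same.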