Contact Name
-
Contact Email
-
Phone
-
Journal Mail Official
-
Editorial Address
-
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN : 2442-6571     EISSN : 2548-3161     DOI : 10.26555
Core Subject : Science
International Journal of Advances in Intelligent Informatics (IJAIN), e-ISSN 2442-6571, is a peer-reviewed open-access journal published three times a year in English. It provides scientists and engineers throughout the world with a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers; accepted papers are made available online (free access), and there is no publication fee for authors.
Arjuna Subject : -
Articles: 11 Documents
Issue: Vol 11, No 2 (2025): May 2025
Enhancing drug-target affinity prediction through pre-trained language model and gated multi-head attention Khoerunnisa, Ghina; Kurniawan, Isman
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1910

Abstract

Drug development requires accurate drug-target interaction (DTI) information to evaluate a drug's potential. However, existing methods for estimating DTI are slow and expensive. Deep learning offers an efficient and effective alternative by leveraging sequence data for prediction. Nevertheless, the DTI binary classification approach suffers from a large number of non-interacting pairs, resulting in data imbalance that negatively impacts performance. To address this issue, DTI is modeled as a regression problem known as drug-target affinity (DTA), which predicts the strength of interactions. While various deep learning methods show competitive results in DTA prediction, they struggle to capture specific drug-target patterns with limited data. To overcome this problem, this study leverages pre-trained language models for enhanced representation. We also utilize gated multi-head attention (GMHA), which modifies multi-head attention by adding dynamic scaling and a gating process to better capture mutual interactions. The results show that our proposed method exceeds the benchmark and baseline in all evaluation metrics, with concordance index (CI) values of 0.893 and 0.872 and modified r-squared (rm2) values of 0.673 and 0.723 on the Davis and KIBA datasets, respectively. Our findings further suggest that pre-trained language models for drug and target receptor representation improve DTA prediction performance. The GMHA method also generally outperforms simple concatenation, with more obvious advantages on more complex datasets such as KIBA. Our approach provides a competitive enhancement in DTA prediction, suggesting a promising direction for further improving drug discovery and development processes.
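As a rough illustration of the gating idea behind GMHA, here is a minimal single-head sketch in plain Python. This is not the authors' implementation: the function names, the per-dimension sigmoid gate parameterization, and the single-head restriction are all assumptions made for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_attention(query, keys, values, gate_w):
    """Single-head scaled dot-product attention with a sigmoid gate.

    query: list[float]; keys/values: list[list[float]];
    gate_w: per-dimension gate weights (hypothetical parameterization).
    """
    d = len(query)
    # Scaled dot-product score of the query against each key
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Attention output: weighted sum of the value vectors
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    # Gate: an element-wise sigmoid gate modulates the context vector
    gate = [sigmoid(g * q) for g, q in zip(gate_w, query)]
    return [g * c for g, c in zip(gate, context)]
```

In GMHA this gating would be applied per head, letting the model scale down heads that contribute little to a given drug-target pair.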
Privacy-Preserving U-Net Variants with pseudo-labeling for radiolucent lesion segmentation in dental CBCT Ismail, Amelia Ritahani; Azlan, Faris Farhan; Noormaizan, Khairul Akmal; Afiqa, Nurul; Nisa, Syed Qamrun; Ghazali, Ahmad Badaruddin; Pranolo, Andri; Saifullah, Shoffan
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1529

Abstract

Accurate segmentation of radiolucent lesions in dental Cone-Beam Computed Tomography (CBCT) is vital for enhancing diagnostic reliability and reducing the burden on clinicians. This study proposes a privacy-preserving segmentation framework leveraging multiple U-Net variants—U-Net, DoubleU-Net, U2-Net, and Spatial Attention U-Net (SA-UNet)—to address challenges posed by limited labeled data and patient confidentiality concerns. To safeguard sensitive information, Differential Privacy Stochastic Gradient Descent (DP-SGD) is integrated using TensorFlow-Privacy, achieving a privacy budget of ε ≈ 1.5 with minimal performance degradation. Among the evaluated architectures, U2-Net demonstrates superior segmentation performance with a Dice coefficient of 0.833 and an Intersection over Union (IoU) of 0.881, showing less than 2% reduction under privacy constraints. To mitigate data annotation scarcity, a pseudo-labeling approach is implemented within an MLOps pipeline, enabling semi-supervised learning from unlabeled CBCT images. Over three iterative refinements, the pseudo-labeling strategy reduces validation loss by 14.4% and improves Dice score by 2.6%, demonstrating its effectiveness. Additionally, comparative evaluations reveal that SA-UNet offers competitive accuracy with faster inference time (22 ms per slice), making it suitable for low-resource deployments. The proposed approach presents a scalable and privacy-compliant framework for radiolucent lesion segmentation, supporting clinical decision-making in real-world dental imaging scenarios.
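The DP-SGD mechanism the abstract refers to, per-example gradient clipping plus calibrated Gaussian noise, can be sketched in plain Python as follows. The paper integrates DP-SGD via TensorFlow-Privacy with a privacy accountant; this toy step, its parameter names, and its defaults are illustrative assumptions only.

```python
import math
import random

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.1, seed=0):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise with std noise_mult * clip_norm, average,
    and take a gradient step. A minimal sketch; real training would use
    a library such as TensorFlow-Privacy with a privacy accountant."""
    rng = random.Random(seed)
    n = len(per_example_grads)
    dim = len(params)
    clipped_sum = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale the gradient down only if its L2 norm exceeds clip_norm
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            clipped_sum[i] += g[i] * scale
    sigma = noise_mult * clip_norm
    noisy = [(clipped_sum[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]
    return [p - lr * gi for p, gi in zip(params, noisy)]
```

The clipping bounds any single patient's influence on the update, and the noise scale (relative to that bound) is what the accountant converts into the reported budget of ε ≈ 1.5.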
An enhanced pivot-based neural machine translation for low-resource languages Sulistyo, Danang Arbian; Wibawa, Aji Prasetya; Prasetya, Didik Dwi; Ahda, Fadhli Almuíini
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.2115

Abstract

This study examines the efficacy of employing Indonesian as an intermediary language to improve the quality of translations from Javanese to Madurese through a pivot-based approach utilizing neural machine translation (NMT). The principal objective of this research is to enhance translation precision and uniformity between these low-resource languages, thereby advancing machine translation models for underrepresented languages. The data collection approach entailed extracting parallel texts from internet sources, followed by pre-processing through tokenization, normalization, and stop-word elimination. The resulting parallel corpora were then utilized to train and assess the NMT models. An intermediary phase utilizing Indonesian is implemented in the translation process to enhance the accuracy and consistency of translations between Javanese and Madurese. The pivot-based strategy consistently surpassed direct translation in BLEU scores for all n-grams (BLEU-1 to BLEU-4). The improved BLEU scores signify increased precision in vocabulary selection, preservation of context, and overall comprehensibility. This study enhances the current literature in machine translation and computational linguistics, especially for low-resource languages, by illustrating the practical effectiveness of a pivot-based method for improving translation precision. The method's reliability and efficacy in producing faithful translations were demonstrated through numerous experiments. The pivot-based technique enhances translation quality, although it has limitations, including the risk of error propagation and bias originating from the pivot language. Further research is necessary to examine the integration of named entity recognition (NER) to improve accuracy and optimize the intermediate translation process.
This project advances the domains of machine translation and the preservation of low-resource languages, with practical implications for multilingual communities, language education resources, and cultural conservation.
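The two-stage pivot pipeline and the unigram core of the BLEU metric used for evaluation can be sketched as follows. The translator callables are hypothetical stand-ins for trained NMT models, and `bleu1` omits the brevity penalty and higher-order n-grams of full BLEU.

```python
from collections import Counter

def pivot_translate(sentence, src_to_pivot, pivot_to_tgt):
    """Two-stage pivot translation: source (Javanese) -> pivot
    (Indonesian) -> target (Madurese). The two callables stand in
    for trained NMT models (hypothetical)."""
    intermediate = src_to_pivot(sentence)  # source -> pivot language
    return pivot_to_tgt(intermediate)      # pivot -> target language

def bleu1(candidate, reference):
    """Unigram (BLEU-1) modified precision: each candidate word is
    credited at most as many times as it appears in the reference."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    overlap = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return overlap / len(cand)
```

Chaining two models this way is also where the error-propagation risk noted above enters: a mistake in the Javanese-to-Indonesian stage is inherited by the Indonesian-to-Madurese stage.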
Enhanced mixup for improved time series analysis Nguyen, Khoa Tho Anh; Nguyen, Khoa; Kim, Taehong; Tran, Ngoc Hong; Dinh, Vinh
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1592

Abstract

Time series data analysis is crucial for real-world applications. While deep learning has advanced in this field, it still faces challenges, such as limited or poor-quality data. In areas like computer vision, data augmentation has been widely used and highly effective in addressing similar issues. However, these techniques are not as commonly explored or applied in the time series domain. This paper addresses the gap by evaluating basic data augmentation techniques using MLP, CNN, and Transformer architectures, prioritized for their alignment with state-of-the-art trends in time series analysis rather than traditional RNN-based methods. The goal is to expand the use of data augmentation in time series analysis. The paper proposes EMixup, which adapts the Mixup method from image processing to time series data. This adaptation involves mixing samples while aiming to maintain the data's temporal structure and integrating target contributions into the loss function. Empirical studies show that EMixup improves the performance of time series models across various architectures (improving 23/24 forecasting cases and 12/24 classification cases). It demonstrates broad applicability and strong results in tasks like forecasting and classification, highlighting its potential utility across diverse time series applications.
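The underlying mixup operation that EMixup adapts can be sketched in plain Python. This shows only the standard convex-combination step on a pair of series, applied point-wise so temporal order is preserved; the paper's loss-side target weighting is not reproduced, and the function name and defaults are assumptions.

```python
import random

def mixup_series(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup for a pair of time series: convex-combine the two sequences
    point-wise (preserving temporal order) and their targets with the
    same lambda drawn from Beta(alpha, alpha)."""
    rng = rng or random.Random()
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```

Because both series are indexed by the same time steps, the mixed sample stays a valid sequence; EMixup's contribution is in how lambda and the two targets enter the training loss, not in this combination step itself.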
Integrating hedge algebras and optimization techniques to reduce forecasting errors in fuzzy time series model Tính, Nghiêm Văn
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1939

Abstract

Accurate forecasting in fuzzy time series (FTS) models is essential for applications such as financial markets, traffic fatalities, and academic enrollments. However, a persistent challenge in FTS forecasting is the determination of optimal interval lengths in the universe of discourse (UD), which significantly impacts prediction accuracy. This study introduces a novel hybrid approach that integrates Hedge Algebra (HA) with Particle Swarm Optimization (PSO) and Simulated Annealing (SA) to enhance forecasting accuracy. HA enables adaptive, non-uniform interval partitioning based on linguistic semantics, while PSO and SA jointly refine these intervals to reduce forecasting errors. Unlike conventional FTS models with fixed partitioning, our approach leverages HA’s mathematical structure alongside PSO’s global search and SA’s local refinement to enhance adaptability and robustness. The model is evaluated on diverse datasets, including enrollment data, traffic fatalities, and gasoline prices, demonstrating superior forecasting accuracy over existing FTS models, as measured by Mean Squared Error (MSE) and Root Mean Squared Error (RMSE).
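The SA-based local refinement of interval boundaries can be sketched generically as below. This is a textbook simulated-annealing loop over a boundary vector, not the paper's full HA+PSO+SA hybrid; the perturbation size, linear cooling schedule, and function names are assumptions.

```python
import math
import random

def anneal_intervals(bounds, error_fn, steps=200, t0=1.0, seed=0):
    """Simulated-annealing refinement of interval boundaries in the
    universe of discourse: perturb one interior boundary at a time and
    accept a worse candidate with probability exp(-delta / T)."""
    rng = random.Random(seed)
    cur = list(bounds)
    best = list(cur)
    cur_err = best_err = error_fn(cur)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling
        cand = list(cur)
        i = rng.randrange(1, len(cand) - 1)  # keep the outer bounds fixed
        lo, hi = cand[i - 1], cand[i + 1]
        # Perturb boundary i, clamped so the partition stays monotone
        cand[i] = min(max(cand[i] + rng.uniform(-0.1, 0.1) * (hi - lo), lo), hi)
        err = error_fn(cand)
        if err < cur_err or rng.random() < math.exp(-(err - cur_err) / t):
            cur, cur_err = cand, err
            if err < best_err:
                best, best_err = list(cand), err
    return best, best_err
```

In the hybrid scheme described above, PSO would supply a good starting partition globally, with a loop like this polishing it locally; `error_fn` would be the MSE of the FTS forecasts under the candidate partition.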
A genetic algorithm approach to green vehicle routing: Optimizing vehicle allocation and route planning for perishable products Asih, Hayati Mukti; Leuveano, Raden Achmad Chairdino; Dharmawan, Dhimas Arief; Ardiansyah, Ardiansyah
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1784

Abstract

This paper introduces a novel approach to the Green Vehicle Routing Problem (GVRP) by integrating multiple trips, heterogeneous vehicles, and time windows, specifically applied to the distribution of bakery products. The primary objective of the proposed model is to optimize route planning and vehicle allocation, aiming to minimize transportation costs and carbon emissions while maximizing product quality upon delivery to retailers. Utilizing a Genetic Algorithm (GA), the model demonstrates its effectiveness in achieving near-optimal solutions that balance economic, environmental, and quality-focused goals. Empirical results reveal a total transportation cost of Rp. 856,458.12, carbon emissions of 365.43 kgCO2e, and an average product quality of 99.90% across all vehicle trips. These findings underscore the capability of the model to efficiently navigate the complexities of real-world logistics while maintaining high standards of product delivery. The proposed GVRP model serves as a valuable tool for industries seeking sustainable and cost-effective distribution strategies, with implications for broader advancements in supply chain management.
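A core GA operator for route chromosomes, order crossover (OX), can be sketched as follows. The paper's actual encoding handles multiple trips, a heterogeneous fleet, and time windows, so this sketch only illustrates how permutation validity (each customer visited exactly once) is preserved during crossover.

```python
import random

def ordered_crossover(p1, p2, rng):
    """Order crossover (OX) for route chromosomes: copy a random slice
    from parent 1, then fill the remaining positions with the missing
    customers in the order they appear in parent 2."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    # Remaining genes, taken in parent-2 order, skipping copied ones
    fill = iter(g for g in p2 if g not in child[a:b])
    for i in range(len(child)):
        if child[i] is None:
            child[i] = next(fill)
    return child
```

A fitness function over such chromosomes would then score each route on the three objectives above: transportation cost, emissions, and delivered product quality.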
LUNGINFORMER: A Multiclass of lung pneumonia diseases detection based on chest X-ray image using contrast enhancement and hybridization inceptionresnet and transformer Hanafi, Hanafi
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1964

Abstract

Lung pneumonia remains a serious disease worldwide. In December 2019, COVID-19 was first identified in Wuhan, China; it caused severe lung pneumonia. Most lung pneumonia cases are diagnosed using traditional medical tools and specialized medical personnel, a process that is both time-consuming and expensive. To address this problem, many researchers have employed deep learning algorithms to develop automated pneumonia detection systems. Deep learning, however, faces the issues of low-quality X-ray images and biased X-ray image information; since the X-ray image is the primary material for building a transfer learning model, problems in the dataset lead to inaccurate classification results, and many previous deep learning approaches have suffered from them. To address this situation, we propose a novel framework that utilizes two essential mechanisms: advanced image contrast enhancement based on Contrast Limited Adaptive Histogram Equalization (CLAHE) and a hybrid deep learning model combining InceptionResNet and a Transformer. Our framework is named LUNGINFORMER. Experiments demonstrate that LUNGINFORMER achieved an accuracy of 0.98, a recall of 0.97, an F1-score of 0.98, and a precision of 0.96. In the AUC test, LUNGINFORMER achieved an outstanding score of 1.00 for each class. We attribute this performance to the contrast enhancement and the hybrid deep learning model.
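The contrast-limiting idea behind CLAHE can be sketched as global histogram equalization with a clipped histogram. Real CLAHE (e.g. OpenCV's `createCLAHE`) works per tile with bilinear interpolation between tiles, so this plain-Python global version is an illustrative simplification; the function name and defaults are assumed.

```python
def clipped_equalize(pixels, levels=256, clip_limit=0.01):
    """Contrast-limited histogram equalization on a flat pixel list:
    clip each histogram bin at clip_limit * n, redistribute the excess
    uniformly across all bins, then apply the CDF mapping."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    limit = max(1, int(clip_limit * n))
    excess = sum(max(0, h - limit) for h in hist)
    hist = [min(h, limit) for h in hist]
    bonus = excess // levels          # uniform redistribution of the excess
    hist = [h + bonus for h in hist]
    # Cumulative distribution, rescaled to the output range [0, levels-1]
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [round(cdf[p] * scale) for p in pixels]
```

Clipping the histogram is what keeps near-uniform lung-field regions from being over-amplified into noise, which is why CLAHE is preferred over plain equalization for chest X-rays.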
Semantic-BERT and semantic-FastText model for education question classification Soares, Teotino Gomes; Azhari, Azhari; Rohkman, Nur
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1955

Abstract

Question classification (QC) is critical in an educational question-answering (QA) system. However, most existing models suffer from limited semantic accuracy, particularly when dealing with complex or ambiguous education queries. The problem lies in their reliance on surface-level features, such as keyword matching, which hampers their ability to capture deeper syntactic and semantic relationships in questions. This results in misclassification and generic responses that fail to address the specific intent of prospective students. This study addresses this gap by integrating semantic dependency parsing into Semantic-BERT (S-BERT) and Semantic-FastText (S-FastText) to enhance question classification performance. Semantic dependency parsing is applied to structure the semantics of interrogative sentences before classification by BERT and FastText. A dataset of 2,173 educational questions covering five question classes (5W1H) is used for training and validation. Model evaluation uses a confusion matrix and K-Fold cross-validation, ensuring robust performance assessment. Experimental results show that both models achieve 100% accuracy, precision, and recall in classifying question sentences, demonstrating their effectiveness in educational question classification. These findings contribute to the development of intelligent educational assistants, paving the way for more efficient and accurate automated question-answering systems in academic environments.
Student Major Subject Prediction Model for Real-Application Using Neural Network Islam, Aminul; Hoque, Jesmeen Mohd Zebaral; Hossen, Md. Jakir; Basiron, Halizah; Tawsif Khan, Chy. Mohammed
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1490

Abstract

The university admission test is a highly competitive arena for students in Bangladesh. Millions of students pass higher secondary school every year, yet only a limited number of government medical, engineering, and public universities are available for further study. In such a competitive environment, it is challenging for a student to prepare for all three categories simultaneously within a short period, so selecting the correct category according to the student's capability becomes more important than following the trend. This study developed a preliminary system to predict a suitable admission test category by evaluating students' early academic performance through data collection, data preprocessing, data modelling, model selection, and finally integration of the trained model into a real system. A Neural Network was selected, with a maximum prediction accuracy of 97.13%, through a systematic comparison against three other machine learning models using the RapidMiner data modeling tool. Finally, the trained Neural Network model was implemented in the Python programming language to suggest a suitable major for admission test candidates in Bangladesh.
Detection and classification of lung diseases in distributed environment Phan, Thuong-Cang; Phan, Anh-Cang
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1828

Abstract

A significant increase in the size of medical data, as well as the complexity of medical diagnosis, poses challenges to processing this data in a reasonable time. Big data techniques are expected to have the upper hand in managing such large-scale datasets. This research presents the detection and prediction of lung diseases using big data and deep learning techniques. In this work, we train neural networks based on Faster R-CNN and RetinaNet with different backbones (ResNet, CheXNet, and Inception ResNet V2) for lung disease classification in a distributed and parallel processing environment. Moreover, we also experimented with three newer network architectures on the medical image dataset, namely CTXNet, Big Transfer (BiT), and Swin Transformer, evaluating their accuracy and training time in a distributed environment. We provide ten scenarios in two types of processing environments to compare and find the most promising scenarios for the detection of lung diseases on chest X-rays. The results show that the proposed method can accurately detect and classify lung lesions on chest X-rays with an accuracy of up to 96%. Additionally, we use Grad-CAM to highlight lung lesions, so that radiologists can clearly see the lesions’ location and size without much effort. The proposed method reduces the costs of time, space, and computing resources. It will be of great significance in reducing workloads, increasing the capacity of medical examinations, and improving health facilities.

Page 1 of 2 | Total Record : 11