Contact Name
Afril Efan Pajri
Contact Email
ejurnal.tdinusofficial@jurnal.tdinus.com
Phone
-
Journal Mail Official
ejurnal.tdinusofficial@jurnal.tdinus.com
Editorial Address
Indonesian Applied Research Computing and Informatics, published by PT. Teras Digital Nusantara, Kota Bima, Indonesia
Location
Kota Bima,
Nusa Tenggara Barat
INDONESIA
Indonesian Applied Research Computing and Informatics
ISSN: -     EISSN: 3110-8806     DOI: https://doi.org/10.64479/iarci
Focus and Scope
Indonesian Applied Research Computing and Informatics is a scientific journal that publishes applied research in the fields of computing and informatics. The journal aims to serve as a platform for academics, researchers, and practitioners to disseminate innovative, practical, and impactful technology-based solutions, particularly in the context of advancing science and technology in Indonesia.
Scope of Topics: Artificial Intelligence and Machine Learning; Information Systems and Databases; Cloud and Distributed Computing; Image and Signal Processing; Web and Mobile Technologies; Software Engineering; Intelligent Systems and Expert Systems; Internet of Things (IoT); Cybersecurity and Cryptography; Big Data and Analytics.
Articles in issue "Vol. 1 No. 2: December (2025)": 5 Documents
Multi-Scale Convolutional Neural Network-Based Classification of Tuberculosis Chest X-ray Images M Ridwan; Syafrudin; Sahrul Fauzan Djiaulhaq; Siti Mutmainah; Teguh Ansyor Lorosae
Indonesian Applied Research Computing and Informatics Vol. 1 No. 2: December (2025)
Publisher : PT. Teras Digital Nusantara

DOI: 10.64479/iarci.v1i2.60

Abstract

Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis, which mainly attacks the lungs. One of the most commonly used methods of TB diagnosis is thorax (chest) X-ray imaging. The resulting images are visually analyzed by medical personnel to identify patterns or characteristics that indicate TB. However, this manual analysis takes time and depends on the physician's experience. This study therefore applies Artificial Intelligence (AI) technology as a diagnostic aid, offering a faster and more efficient alternative for determining a patient's TB status. It proposes a Multi-Scale Convolutional Neural Network (CNN) to classify tuberculosis from thorax X-ray images. The dataset consists of 790 lung X-ray images divided into two classes: normal lungs and lungs with indications of tuberculosis. The CNN architecture comprises three convolutional layers with a 3×3 kernel, three 2×2 max pooling layers, and one fully connected layer with a softmax activation function. Each convolutional layer uses 128 filters, and training is optimized with the Adam optimizer. Training was carried out for 15 epochs and reached an accuracy of 81%; model evaluation yielded an accuracy of 79%, indicating that the proposed method performs adequately in classifying tuberculosis.
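The architecture described in the abstract above (three 3×3 convolutional layers with 128 filters each, three 2×2 max pooling layers, a softmax fully connected layer, Adam optimizer) can be sketched in Keras roughly as follows. This is an illustrative reconstruction, not the authors' code; the 128×128 grayscale input size is an assumption, since the abstract does not state it.

```python
from tensorflow.keras import layers, models

# Illustrative reconstruction of the CNN described in the abstract.
# Input size 128x128 grayscale is an assumption, not stated in the text.
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(128, (3, 3), activation="relu"),  # conv block 1: 128 filters, 3x3 kernel
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),  # conv block 2
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),  # conv block 3
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),  # two classes: normal vs. TB-indicated
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Training would then call `model.fit(...)` for 15 epochs on the 790-image dataset.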
Impact of Data Normalization on K-Nearest Neighbor Classification Performance: A Case Study on Date Fruit Dataset Muhammad Jauhar Vikri; Afril Efan Pajri; Putri Liana
Indonesian Applied Research Computing and Informatics Vol. 1 No. 2: December (2025)
Publisher : PT. Teras Digital Nusantara

DOI: 10.64479/iarci.v1i2.61

Abstract

Data normalization is a crucial preprocessing step for distance-based classification algorithms such as K-Nearest Neighbor (KNN), as differences in feature scales can significantly affect distance calculations and classification accuracy. This study investigates the impact of data normalization on KNN classification performance using the Date Fruit Dataset as a case study. Three preprocessing scenarios are evaluated: raw data without normalization, Min–Max normalization, and Z-score standardization. In addition, the performance of standard KNN is compared with distance-weighted KNN to assess the contribution of distance weighting under different preprocessing conditions. The experiments are conducted using stratified 10-fold cross-validation, and model performance is evaluated using accuracy and standard deviation. Statistical significance of performance differences is examined using a paired t-test, and a sensitivity analysis examines the effect of varying the number of nearest neighbors. The results show that data normalization leads to a substantial improvement in classification performance compared to raw data. Z-score standardization achieves the highest and most stable accuracy, followed by Min–Max normalization. Distance-weighted KNN consistently produces slightly higher accuracy than standard KNN; however, the improvement is not statistically significant after normalization. The sensitivity analysis indicates that normalized data yields a wider and more stable range of optimal k values. These findings demonstrate that data normalization plays a more dominant role than distance weighting in improving KNN performance. The study provides empirical evidence that proper preprocessing is essential for reliable KNN-based classification and establishes a robust baseline for further enhancements such as feature weighting and metaheuristic optimization.
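The effect the abstract describes, a large-scale feature dominating KNN's distance computation until normalization restores the informative feature's influence, can be reproduced on synthetic data with scikit-learn. This is a minimal sketch on fabricated data, not the Date Fruit Dataset:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

rng = np.random.default_rng(42)
n = 400
y = rng.integers(0, 2, n)
# One small-scale informative feature, one large-scale noise feature:
# without normalization the noise dominates the Euclidean distance.
X = np.column_stack([y + rng.normal(0, 0.3, n), rng.normal(0, 1000, n)])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

def score(model):
    return cross_val_score(model, X, y, cv=cv).mean()

acc_raw = score(KNeighborsClassifier(n_neighbors=5))
acc_minmax = score(make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=5)))
acc_zscore = score(make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)))
acc_weighted = score(make_pipeline(StandardScaler(),
                                   KNeighborsClassifier(n_neighbors=5, weights="distance")))
print(f"raw={acc_raw:.3f} minmax={acc_minmax:.3f} "
      f"zscore={acc_zscore:.3f} zscore+weighted={acc_weighted:.3f}")
```

On data like this, the raw model hovers near chance while both normalized pipelines recover high accuracy, and distance weighting changes the result only marginally, mirroring the study's ranking of the two factors.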
MobileNetV2 Transfer Learning Implementation for Waste Classification Fifi Andriani; Ade Yuliati; Anis Yaturahmah; Siti Mutmainah
Indonesian Applied Research Computing and Informatics Vol. 1 No. 2: December (2025)
Publisher : PT. Teras Digital Nusantara

DOI: 10.64479/iarci.v1i2.62

Abstract

Waste management is one of the major challenges in maintaining environmental sustainability: waste sorting is still largely performed manually, requiring significant time and effort and relying heavily on human accuracy, which makes it inefficient and prone to errors. This study therefore applies Artificial Intelligence (AI) technology to support more effective and sustainable environmental management, proposing a Convolutional Neural Network (CNN) to classify waste types from digital images. Waste images serve as input to the image processing stage and are classified into several waste categories. The CNN architecture consists of multiple convolutional layers with a 3×3 kernel, max pooling layers for feature extraction, and a fully connected layer with a softmax activation function to determine the output class; training is optimized with the Adam optimizer. The experimental results demonstrate that the proposed model classifies waste types with good accuracy, indicating that this AI-based approach can serve as an effective supporting solution for intelligent, efficient, and sustainable waste management systems and contribute to environmental conservation efforts.
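A MobileNetV2 transfer-learning setup of the kind the article's title names can be sketched in Keras as follows. This is an illustration, not the authors' code: the number of waste categories (6) and the 224×224 input size are assumptions, and `weights=None` stands in for the usual `weights="imagenet"` only to keep the sketch self-contained (no weight download).

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

NUM_CLASSES = 6  # assumed number of waste categories; the abstract does not state it

# Backbone; in practice weights="imagenet" would load pretrained features.
# weights=None is used here only to keep the sketch self-contained.
base = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the backbone: the classic transfer-learning setup

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),           # pool backbone features to one vector
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new classification head
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Only the small head is trained on the waste images; the frozen backbone supplies generic visual features, which is what makes transfer learning practical on modest datasets.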
Unsupervised Credit Card Fraud Detection Using Autoencoder-Based Anomaly Detection on Highly Imbalanced Transaction Data Mursalim Mursalim; Sutriawan Sutriawan; Nimas Ratna Sari; Nur Wahyu Hidayat; Zumhur Alamin
Indonesian Applied Research Computing and Informatics Vol. 1 No. 2: December (2025)
Publisher : PT. Teras Digital Nusantara

DOI: 10.64479/iarci.v1i2.64

Abstract

Credit card fraud detection is a critical problem in the financial sector, primarily due to its direct impact on financial liability and the preservation of user trust. A major challenge in fraud detection is the extreme class imbalance: fraudulent transactions are rare compared to legitimate ones, so supervised approaches require sufficient labeled fraud data and often become biased toward the majority class. This study proposes an unsupervised anomaly detection approach based on an Autoencoder to identify fraudulent transactions in highly imbalanced credit card transaction data. The Autoencoder is trained exclusively on normal transactions to learn representative patterns of legitimate behavior. During the inference phase, transactions exhibiting elevated reconstruction error relative to established norms are designated as anomalies, indicative of potential fraud. The experiments use the Credit Card Fraud Detection dataset from Kaggle, containing 284,807 transactions: 284,315 normal (99.828%) and 492 fraudulent (0.172%). The workflow includes numerical feature normalization for the Time and Amount attributes, splitting normal data into training and validation sets, selecting an anomaly threshold based on the reconstruction error distribution, and evaluating performance using metrics suitable for imbalanced data such as precision, recall, and F1-score. The results indicate that the proposed unsupervised Autoencoder offers an effective alternative when labeled fraud examples are limited, by detecting deviations from normal transaction patterns through reconstruction behavior.
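The reconstruction-error workflow described above (train on normal data only, then flag inputs the model reconstructs poorly) can be illustrated on synthetic data, with scikit-learn's MLPRegressor standing in for a neural Autoencoder. This is a sketch, not the authors' model; the bottleneck sizes and the 99th-percentile threshold are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic "normal" transactions: 8 features driven by 2 latent factors,
# so a small bottleneck can reconstruct them well.
Z = rng.normal(size=(2000, 2))
W = rng.normal(size=(2, 8))
normal = Z @ W + 0.1 * rng.normal(size=(2000, 8))
# Rare "fraud" points that break the normal correlation structure.
fraud = rng.normal(6.0, 1.0, size=(20, 8))

scaler = StandardScaler().fit(normal)  # feature normalization (cf. Time/Amount)
Xn = scaler.transform(normal)

# Autoencoder stand-in with a 4-2-4 bottleneck, trained to reconstruct
# *normal* transactions only (target equals input).
ae = MLPRegressor(hidden_layer_sizes=(4, 2, 4), activation="tanh",
                  max_iter=1000, random_state=0)
ae.fit(Xn, Xn)

def reconstruction_error(X):
    return ((ae.predict(X) - X) ** 2).mean(axis=1)

# Threshold chosen from the error distribution on normal data.
threshold = np.quantile(reconstruction_error(Xn), 0.99)
flags = reconstruction_error(scaler.transform(fraud)) > threshold
print(f"flagged {flags.sum()} of {len(flags)} injected anomalies")
```

Because the model never saw fraud-like inputs, their reconstruction error far exceeds the threshold, which is exactly the mechanism that lets the approach work without labeled fraud examples.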
Deep Learning-Based Software Defect Detection: A Comparative Study of Neural Network Architectures Linda Marlinda; Gilang Mahendra; Ade Kurniawan; Doni Ramdhani; Irma Eryanti Putri; Miftahul Jannah
Indonesian Applied Research Computing and Informatics Vol. 1 No. 2: December (2025)
Publisher : PT. Teras Digital Nusantara

DOI: 10.64479/iarci.v1i2.65

Abstract

Software defect prediction plays a crucial role in software quality assurance by enabling early identification of defect-prone modules, thereby reducing testing effort and improving software reliability. This study presents a comprehensive comparative analysis of three widely used deep learning architectures for software defect prediction, Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM), under identical experimental conditions. A systematic seven-phase framework was employed, covering data collection, preprocessing, feature engineering, model implementation, training, validation, and comparative evaluation using twelve datasets from the NASA Metrics Data Program. Experimental results indicate that the LSTM architecture consistently outperforms CNN and MLP, achieving an average accuracy of 93.5%, precision of 94.2%, recall of 93.1%, F1-score of 93.6%, and ROC-AUC of 0.947 across all datasets. Statistical significance analysis using Friedman and Wilcoxon signed-rank tests confirms that the performance improvements of LSTM are statistically significant (p < 0.001) with large effect sizes. Furthermore, cross-dataset evaluation demonstrates that LSTM exhibits superior generalization capability, with a smaller average accuracy degradation compared to CNN and MLP. The study also highlights important trade-offs between predictive performance and computational efficiency, providing practical guidance for architecture selection in real-world software defect prediction systems. These findings contribute empirical insights and deployment-oriented recommendations for advancing automated software quality assurance.
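The statistical procedure the abstract mentions (a Friedman test across the three architectures, then a Wilcoxon signed-rank test on paired per-dataset scores) can be sketched with SciPy. The accuracy values below are illustrative placeholders, not the paper's results:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Illustrative per-dataset accuracies for 12 datasets (placeholders, not the
# paper's numbers), constructed so LSTM is consistently ahead of CNN and MLP.
rng = np.random.default_rng(1)
base = rng.uniform(0.80, 0.90, 12)
mlp = base
cnn = base + rng.normal(0.02, 0.005, 12)
lstm = base + rng.normal(0.05, 0.005, 12)

# Friedman test: do the three architectures differ across the 12 datasets?
stat, p_friedman = friedmanchisquare(mlp, cnn, lstm)

# Wilcoxon signed-rank test on the paired LSTM-vs-CNN differences.
_, p_wilcoxon = wilcoxon(lstm, cnn)

print(f"Friedman p={p_friedman:.4g}, Wilcoxon (LSTM vs CNN) p={p_wilcoxon:.4g}")
```

Both tests are paired by dataset, which is what makes them appropriate here: they compare architectures on the same twelve benchmarks rather than pooling all scores together.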
