Location: INDONESIA
JOURNAL OF APPLIED INFORMATICS AND COMPUTING
ISSN: -     EISSN: 2548-6861     DOI: 10.30871
Core Subject: Science
Journal of Applied Informatics and Computing (JAIC) Volume 2, Number 1, July 2018. It contains papers drawn from research results in the field of Applied Informatics and Computer Technology, with e-ISSN: 2548-9828. The issue comprises 3 articles that have been substantively reviewed by the editorial team and reviewers.
Arjuna Subject: -
Articles: 695 documents
Implementation of Conditional WGAN-GP, ResNet50V2, and HDBSCAN for Generating and Recommending Traditional Lombok Songket Motifs
Akbar, Ardiyallah; Karim, Muh Nasirudin; Imran, Bahtiar
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.10894

Abstract

Songket is a traditional Indonesian woven textile with profound cultural and aesthetic value, particularly in Lombok, where artisans continue to preserve its distinctive motifs. However, the creation of new designs is still carried out manually, requiring considerable time and relying heavily on the artisans’ creativity. This study proposes an integrated system that combines Conditional Wasserstein Generative Adversarial Network with Gradient Penalty (CWGAN-GP), ResNet50V2, and HDBSCAN to automatically generate and recommend Lombok’s traditional songket motifs. The dataset consists of primary data collected directly from local artisans and secondary data from the BatikNitik public repository, thereby providing authentic yet diverse motif samples for training. CWGAN-GP is employed to synthesize motifs with stable and realistic structures across multiple epochs. Subsequently, ResNet50V2 is utilized for deep visual feature extraction, HDBSCAN for density-based clustering, and UMAP for two-dimensional visualization of motif distribution. The system successfully groups motifs into meaningful clusters, with the largest cluster containing consistent patterns of high aesthetic value. A recommendation mechanism is also developed to suggest up to five similar motifs from the original dataset within the same cluster, ensuring cultural relevance while fostering design innovation. Despite these promising outcomes, several limitations remain, such as the relatively small number of songket motif samples, variations in motif quality, and challenges during data collection including inconsistent lighting and non-uniform patterns. These factors affect both dataset consistency and generative performance. Nevertheless, this approach demonstrates the potential of artificial intelligence to support the preservation and innovation of cultural heritage by assisting artisans in creating and exploring new motifs more efficiently without losing their traditional identity.
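The recommendation stage described in this abstract can be sketched as follows: ResNet50V2 (without its classification head) embeds each motif image, HDBSCAN groups the embeddings, and up to five original motifs from the query's cluster are returned by similarity. This is a minimal illustration rather than the authors' code; the folder layout, the min_cluster_size value, and the use of cosine similarity for ranking are assumptions, and the UMAP visualization step is omitted.

```python
# Minimal sketch (not the authors' implementation): ResNet50V2 embeddings,
# HDBSCAN clustering, and a top-5 same-cluster recommendation step.
import glob
import numpy as np
import hdbscan
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.applications.resnet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.metrics.pairwise import cosine_similarity

extractor = ResNet50V2(weights="imagenet", include_top=False, pooling="avg")

def embed(paths):
    """Return one pooled ResNet50V2 feature vector per motif image."""
    batch = np.stack([
        preprocess_input(image.img_to_array(image.load_img(p, target_size=(224, 224))))
        for p in paths
    ])
    return extractor.predict(batch, verbose=0)

motif_paths = sorted(glob.glob("motifs/*.png"))   # hypothetical folder of original + generated motifs
features = embed(motif_paths)

# Density-based clustering of the embeddings; min_cluster_size is an assumed value.
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(features)

def recommend(query_idx, k=5):
    """Suggest up to k motifs from the same HDBSCAN cluster as the query motif."""
    same = [i for i, lab in enumerate(labels)
            if lab == labels[query_idx] and lab != -1 and i != query_idx]
    sims = cosine_similarity(features[[query_idx]], features[same])[0]
    return [motif_paths[same[i]] for i in np.argsort(sims)[::-1][:k]]
```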
Gaussian Mixture-Based Data Augmentation Improves QSAR Prediction of Corrosion Inhibition Efficiency
Ignasius, Darnell; Akrom, Muhamad; Budi, Setyo
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.10895

Abstract

Predicting corrosion inhibition efficiency, IE (%), is often hindered by small, heterogeneous datasets. This study proposes a Gaussian mixture-based data augmentation pipeline to strengthen QSAR generalization under data scarcity. A curated set of 70 drug-like compounds with 14 physicochemical and quantum descriptors was cleaned, split 90/10 (train/test), and transformed using a Quantile Transformer followed by a Robust Scaler. A Gaussian Mixture Model (GMM) with 2–5 components, selected by the variational lower bound, was fitted to the transformed training features and used to generate up to 2,500 synthetic samples. Eight regressors (Gaussian Process, Decision Tree, Random Forest, Bagging, Gradient Boosting, Extra Trees, SVR, and KNN) were evaluated on the held-out test set using R² and RMSE. Augmentation improved performance across several model families: for example, Gaussian Process R² improved from −1.54 to 0.54 (RMSE from 11.71 to 5.01) and Decision Tree R² from −0.33 to 0.63 (RMSE from 8.48 to 4.44), while Bagging and Random Forest showed R² increases of 0.67 and 0.40, respectively. The optimal synthetic sample size varied by model.
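The augmentation step can be sketched with scikit-learn as below. Only the 90/10 split, the QuantileTransformer followed by RobustScaler, the 2–5 component search by lower bound, and the 2,500-sample budget come from the abstract; fitting the mixture jointly on descriptors plus the IE target, the loader name, and the choice of Gradient Boosting as the example regressor are assumptions for illustration.

```python
# Illustrative sketch of the Gaussian-mixture augmentation pipeline (assumptions noted above).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer, RobustScaler
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_squared_error

X, y = load_descriptors()   # hypothetical loader: 70 compounds x 14 descriptors, IE (%) target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=42)

scaler = make_pipeline(QuantileTransformer(n_quantiles=50), RobustScaler())
Z_tr = scaler.fit_transform(np.column_stack([X_tr, y_tr]))   # joint descriptors + target (assumption)

# Pick 2-5 components by the EM lower bound (stand-in for the paper's variational criterion).
gmm = max((GaussianMixture(n_components=k, random_state=0).fit(Z_tr) for k in range(2, 6)),
          key=lambda m: m.lower_bound_)

synth = scaler.inverse_transform(gmm.sample(2500)[0])        # up to 2,500 synthetic rows
X_aug = np.vstack([X_tr, synth[:, :-1]])
y_aug = np.concatenate([y_tr, synth[:, -1]])

model = GradientBoostingRegressor(random_state=0).fit(X_aug, y_aug)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```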
Anemia Classification with Clinical Feature Engineering and SHAP Interpretation
Amalia, Ikhlasul; Rumini, Rumini
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.10912

Abstract

Anemia is a global health issue that has a significant impact on quality of life and productivity. Early and accurate detection is essential to prevent more serious complications. This study aims to develop a machine learning-based anemia classification model using the XGBoost algorithm and to compare its performance with the Logistic Regression and Random Forest methods. The dataset used in this study was obtained from the Kaggle platform and consists of 1,421 samples and six clinical attributes, namely Gender, Hemoglobin (HGB), Mean Corpuscular Hemoglobin (MCH), Mean Corpuscular Hemoglobin Concentration (MCHC), Mean Corpuscular Volume (MCV), and Result. During feature engineering, a derived feature, the hemoglobin-to-MCV ratio (Hb/MCV), was added, which is medically relevant for distinguishing types of anemia. Evaluation results showed that XGBoost and Random Forest achieved an accuracy and F1-Score of 100%, while Logistic Regression achieved 98.9%. XGBoost was selected as the primary model due to its computational efficiency and support for interpretation using SHAP (SHapley Additive exPlanations). SHAP visualization revealed that the Hb/MCV ratio and hemoglobin were the most influential features in classification. This model has the potential to serve as a decision support system for automated anemia screening and can be further integrated into clinical systems.
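A compact sketch of the described workflow (Hb/MCV feature engineering, XGBoost training, SHAP interpretation) might look like the following; the CSV file name, column names, and label coding are assumptions, not the authors' exact code.

```python
# Minimal sketch (file name, column names, and label coding are assumptions).
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("anemia.csv")                      # hypothetical Kaggle export
df["Hb_MCV_ratio"] = df["Hemoglobin"] / df["MCV"]   # derived clinical feature from the paper

# Gender and Result are assumed to be numerically encoded (e.g., 0/1).
X = df[["Gender", "Hemoglobin", "MCH", "MCHC", "MCV", "Hb_MCV_ratio"]]
y = df["Result"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))

# SHAP interpretation: which features drive each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```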
Fuzzy Logic and Neural Network-Based Self-Tuning PID for Vacuum Pressure Stabilization
Sanjaya, Berza H.; Pujiyanta, Ardi; Puriyanto, Riky Dwi
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.10945

Abstract

The conventional PID controller is widely used for vacuum pressure control; however, it has limitations when faced with nonlinear system characteristics and external disturbances, leading to a decline in performance. Several previous studies have proposed integrating PID with intelligent methods such as neural networks or fuzzy logic applied separately; nevertheless, these single-method approaches still encounter limitations in adaptability and robustness. This study aims to develop a self-tuning PID method based on the combination of Neural Networks (NN) and Fuzzy Inference Systems (FIS) to enhance the stability and accuracy of vacuum pressure control. A nonlinear vacuum system plant model is constructed within the Simulink environment to generate a dataset used for training the NN with the Levenberg-Marquardt algorithm. The NN is employed to predict changes in the PID parameters adaptively, while the FIS provides fine corrections to strengthen system stability. Simulation results demonstrate that the proposed approach reduces overshoot from 36.47% to 31.51%, decreases steady-state error from 0.069 to 0.052, and lowers the RMSE from 0.125 to 0.108 compared to conventional PID. Thus, integrating NN and FIS within the self-tuning mechanism proves more effective in addressing nonlinear dynamics and external disturbances, resulting in a more stable and accurate system response.
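Outside of Simulink, the self-tuning structure can be illustrated schematically in Python: a discrete PID whose gains are nudged each step by increments supplied by the NN and FIS stages. The class below is a rough sketch under that assumption; nn_predict and fis_correct are hypothetical stand-ins for the trained network and the fuzzy inference system, and the initial gains are placeholder values.

```python
# Schematic discrete-time self-tuning PID (not the authors' Simulink model).
# In the paper, the gain increments come from an LM-trained NN plus an FIS fine
# correction; here they are left as pluggable callables.

class SelfTuningPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def adjust(self, d_kp, d_ki, d_kd):
        """Apply adaptive increments predicted by the NN/FIS stage."""
        self.kp += d_kp
        self.ki += d_ki
        self.kd += d_kd

    def step(self, setpoint, measurement):
        """Standard discrete PID law with the current (adapted) gains."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage sketch (nn_predict and fis_correct are hypothetical):
pid = SelfTuningPID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
# d = [a + b for a, b in zip(nn_predict(state), fis_correct(error_signals))]
# pid.adjust(*d); u = pid.step(setpoint, pressure_reading)
```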
Comparison of Support Vector Regression and Extreme Learning Machine Methods for Predicting Bitcoin Prices
Ferdinand, Felix; Anthony, Ryan; Jason Winata, Tanjaya; Sutanto, Jason; Souwiko, Richard; Fernando, Christian
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.10983

Abstract

Bitcoin can be used for transactions, mining, and investment. Bitcoin transactions are highly secure thanks to validation by Bitcoin miners. Miners who validate transactions are rewarded with Bitcoin, which adds supply to the Bitcoin network. However, these rewards will run out over time, and the depletion of this supply can affect the price of Bitcoin. In addition, investing in Bitcoin is very risky given its fluctuating price, so price prediction is necessary. In this research, prediction is performed using Support Vector Regression (SVR) and Extreme Learning Machine (ELM). The Bitcoin price dataset (USD) comes from Yahoo Finance, and the Open, High, Low, and Close prices are predicted. Across all series and both splits, ELM outperforms SVR. Under the 80/20 split, the average error of ELM is MAE 418.698 USD, RMSE 633.953 USD, and R² 0.987, versus SVR's MAE 1061.449 USD, RMSE 1227.499 USD, and R² 0.955, a reduction of 60.6% (MAE) and 48.4% (RMSE). With the 60/40 split, ELM remains strong (MAE 550.783 USD, RMSE 850.656 USD, R² 0.989) while SVR deteriorates (MAE 1843.534 USD, RMSE 2093.542 USD, R² 0.935), yielding average reductions of 70.1% in MAE and 59.4% in RMSE. ELM consistently tracks both price levels and day-to-day movements, with typical errors of only a few hundred dollars. These results indicate that ELM is the more reliable choice and is capable of capturing the non-linearities in Bitcoin price prediction.
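Since ELM is not part of scikit-learn, a minimal single-hidden-layer implementation (random hidden weights, closed-form output weights) is sketched below next to sklearn's SVR. The hidden-layer size, SVR settings, and lag-feature construction are assumptions; only the 80/20 chronological split and the MAE/RMSE/R² metrics come from the abstract.

```python
# Minimal ELM regressor next to sklearn's SVR (illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

class ELMRegressor:
    def __init__(self, n_hidden=100, random_state=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random hidden layer
        self.beta = np.linalg.pinv(H) @ y          # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def make_lags(series, lags=5):
    """Predict the next price from the previous `lags` values (assumed feature design)."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

close = load_close_prices()                        # hypothetical Yahoo Finance Close series
X, y = make_lags(np.asarray(close))
split = int(0.8 * len(X))                          # 80/20 chronological split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]
scaler = StandardScaler().fit(X_tr)

for name, model in [("ELM", ELMRegressor(200)), ("SVR", SVR(kernel="rbf", C=100.0))]:
    model.fit(scaler.transform(X_tr), y_tr)
    p = model.predict(scaler.transform(X_te))
    print(name, "MAE", mean_absolute_error(y_te, p),
          "RMSE", mean_squared_error(y_te, p) ** 0.5, "R2", r2_score(y_te, p))
```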
Enhancing Liver Cirrhosis Staging Accuracy using Optuna-Optimized TabNet
Arifin, Muhammad Farhan; Dewi, Ika Novita; Salam, Abu; Utomo, Danang Wahyu; Rakasiwi, Sindhu
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.11011

Abstract

Liver cirrhosis is a progressive chronic disease whose early detection poses a clinical challenge, making accurate severity staging crucial for patient management. This research proposes and evaluates a TabNet deep learning model, specifically designed for tabular data, to address this challenge. In the initial evaluation, a TabNet model with its default configuration achieved a baseline accuracy of 65.11% on a public clinical dataset. To enhance performance, hyperparameter optimization with Optuna was applied, which increased the accuracy significantly to 70.37%, with precision, recall, and F1-score each reaching 70%. The model's discriminative ability in multiclass classification was also validated as reliable through AUC evaluation. Beyond the accuracy gains, the model's interpretability was validated through the identification of key predictive features such as Prothrombin and Hepatomegaly, which align with clinical indicators. This study demonstrates that Optuna-optimized TabNet is an effective and interpretable approach, with significant potential for integration into clinical decision support systems to support a more precise diagnosis of liver cirrhosis.
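A possible shape of the Optuna search over TabNet hyperparameters is sketched below using the pytorch-tabnet package; the search ranges, epoch budget, and data-loading helper are assumptions rather than the authors' configuration.

```python
# Illustrative Optuna study over TabNet hyperparameters (ranges are assumptions).
import optuna
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.metrics import accuracy_score

# Hypothetical loader returning numpy arrays of the cirrhosis staging dataset.
X_train, y_train, X_valid, y_valid = load_cirrhosis_splits()

def objective(trial):
    clf = TabNetClassifier(
        n_d=trial.suggest_int("n_d", 8, 64),
        n_a=trial.suggest_int("n_a", 8, 64),
        n_steps=trial.suggest_int("n_steps", 3, 10),
        gamma=trial.suggest_float("gamma", 1.0, 2.0),
        lambda_sparse=trial.suggest_float("lambda_sparse", 1e-5, 1e-2, log=True),
        verbose=0,
    )
    clf.fit(X_train, y_train,
            eval_set=[(X_valid, y_valid)],
            eval_metric=["accuracy"],
            max_epochs=100, patience=20)
    return accuracy_score(y_valid, clf.predict(X_valid))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("best accuracy:", study.best_value, "best params:", study.best_params)
```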
Multi-Modal Sensor Integration in Smart Rooms to Optimize Internet of Things-Based Monitoring and Security Control of Autistic Child Detection Activities
Taufiq, Arfah; Sahibu, Supriadi; Jalil, Abdul
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.11013

Abstract

The advancement of Internet of Things (IoT) technology has opened new opportunities for automated monitoring systems, especially for children with Autism Spectrum Disorder (ASD), who require intensive supervision due to communication limitations and unpredictable behavior. This study aims to design and implement a smart room system integrated with multi-modal sensors to monitor autistic children's activities in real time. Using a Research and Development (R&D) approach with the ADDIE model, the system was developed around an ESP32 microcontroller and sensors including PIR (motion), DHT22 (temperature), a microphone (sound), and LDR (light). A Mamdani fuzzy logic algorithm processes the sensor data to classify safety levels, and the results are visualized and delivered as notifications via the Blynk platform. Test results show that the system effectively detects "safe," "needs attention," and "critical" conditions with high accuracy, providing timely alerts for parents. This solution enhances home-based supervision and offers a practical, IoT-based approach to child safety and care.
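On the processing side, the Mamdani classification can be prototyped with scikit-fuzzy as in the simplified sketch below, which uses only two of the four sensors (sound and temperature); the membership ranges and rules are illustrative assumptions, and the ESP32/Blynk integration is omitted.

```python
# Simplified Mamdani sketch with scikit-fuzzy; ranges and rules are assumptions.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

sound = ctrl.Antecedent(np.arange(0, 101, 1), "sound")        # relative microphone level (%)
temp = ctrl.Antecedent(np.arange(20, 41, 1), "temperature")   # DHT22 reading (deg C)
safety = ctrl.Consequent(np.arange(0, 101, 1), "safety")      # 0 = critical, 100 = safe

sound["quiet"] = fuzz.trimf(sound.universe, [0, 0, 40])
sound["moderate"] = fuzz.trimf(sound.universe, [30, 50, 70])
sound["loud"] = fuzz.trimf(sound.universe, [60, 100, 100])
temp["comfortable"] = fuzz.trimf(temp.universe, [20, 25, 30])
temp["hot"] = fuzz.trimf(temp.universe, [28, 40, 40])
safety["critical"] = fuzz.trimf(safety.universe, [0, 0, 40])
safety["attention"] = fuzz.trimf(safety.universe, [30, 50, 70])
safety["safe"] = fuzz.trimf(safety.universe, [60, 100, 100])

rules = [
    ctrl.Rule(sound["quiet"] & temp["comfortable"], safety["safe"]),
    ctrl.Rule(sound["moderate"] | temp["hot"], safety["attention"]),
    ctrl.Rule(sound["loud"] & temp["hot"], safety["critical"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["sound"] = 75
sim.input["temperature"] = 33
sim.compute()
print("defuzzified safety score:", sim.output["safety"])
```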
Classification of Foot Wound Severity in Type 2 Diabetes Mellitus Patients Using MobileNetV2-Based Convolutional Neural Network
Fitriah, Nurul; Sriani, Sriani
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.11015

Abstract

Diabetic Foot Ulcer (DFU) is a serious complication in Type 2 Diabetes Mellitus patients that may lead to amputation if not properly treated. This study employs the MobileNetV2 architecture, a Convolutional Neural Network (CNN), to classify DFU severity into two categories: severe and non-severe. The dataset consists of 1,000 images, divided into 70% training, 20% validation, and 10% testing. Data preprocessing was performed using normalization, augmentation (rotation, flipping, zooming), and dataset balancing to enhance model generalization. The model was trained for 10 epochs with a batch size of 32, a learning rate of 0.001, and the Adam optimizer. Experimental results show 98% accuracy on the validation data with average precision, recall, and F1-score of 0.98. In the testing stage, the model achieved 94% accuracy with average precision, recall, and F1-score of 0.94. The confusion matrix also indicates strong performance in distinguishing both classes. This study demonstrates that a MobileNetV2-based CNN with proper preprocessing and hyperparameter settings can serve as an effective supporting method for early DFU severity classification, improving the speed and accuracy of medical decision-making.
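A minimal transfer-learning setup matching the reported hyperparameters (10 epochs, batch size 32, Adam with learning rate 0.001) is sketched below; the directory names, augmentation ranges, and the frozen-backbone choice are assumptions.

```python
# Minimal MobileNetV2 transfer-learning sketch for binary DFU severity classification.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                         horizontal_flip=True, zoom_range=0.2)
train = gen.flow_from_directory("dfu/train", target_size=(224, 224),
                                batch_size=32, class_mode="binary")
val = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "dfu/val", target_size=(224, 224), batch_size=32, class_mode="binary")

base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # use MobileNetV2 as a frozen feature extractor

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # severe vs non-severe
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=10)
```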
Face Recognition Using MTCNN Face Detection, ResNetV1 Feature Embeddings, and SVM Classification
Pratama, Ivan Putra; Ningrum, Novita Kurnia
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.11016

Abstract

Face recognition has become an essential component of modern security and authentication systems, yet its effectiveness is often challenged by limited datasets, class imbalance, variations in facial poses, lighting conditions, and image resolutions. This study proposes a face recognition pipeline that integrates Multi-task Cascaded Convolutional Networks (MTCNN) for face detection, Residual Network V1 (ResNetV1) for feature extraction, and Support Vector Machine (SVM) for classification. Unlike previous works that rely on large-scale datasets and end-to-end deep learning models, this study emphasizes the effectiveness of the pipeline under constrained data conditions, using 856 images across 191 classes with highly imbalanced distribution. Experimental results show that MTCNN successfully detected 97.1% of faces, while ResNetV1 produced 512-dimensional embeddings that formed well-separated clusters validated by clustering metrics (Silhouette Score = 0.578, Davies-Bouldin Index = 0.566). The SVM classifier achieved 92.9% accuracy, with macro-average precision, recall, and F1-scores of 0.89, 0.92, and 0.89 respectively, significantly outperforming a baseline k-Nearest Neighbor (k-NN) model that only reached 63.9% accuracy. These findings highlight the novelty of this study: demonstrating that a lightweight yet robust pipeline can deliver reliable recognition performance even in small, imbalanced datasets, making it suitable for real-world scenarios where large-scale training data are not available.
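One common way to realize this kind of pipeline is with the facenet-pytorch package, whose MTCNN detector and InceptionResnetV1 backbone produce 512-dimensional embeddings like those described; whether the authors used these exact libraries is an assumption, as are the path/label lists and SVM settings below.

```python
# One possible MTCNN -> 512-d embedding -> SVM pipeline (libraries are an assumption).
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

detector = MTCNN(image_size=160)                          # face detection + alignment
embedder = InceptionResnetV1(pretrained="vggface2").eval()

def embed(path):
    """Detect the face in an image and return its 512-d embedding, or None if no face."""
    face = detector(Image.open(path).convert("RGB"))
    if face is None:
        return None
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0).numpy()

def dataset(paths, labels):
    """Embed all images, skipping those where MTCNN finds no face."""
    feats, kept = [], []
    for p, lab in zip(paths, labels):
        e = embed(p)
        if e is not None:
            feats.append(e)
            kept.append(lab)
    return feats, kept

# train_paths/train_labels and test_paths/test_labels are hypothetical lists for the
# 856-image, 191-class dataset described in the abstract.
X_tr, y_tr = dataset(train_paths, train_labels)
X_te, y_te = dataset(test_paths, test_labels)

clf = SVC(kernel="linear", probability=True, class_weight="balanced").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```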
Optimizing LoRa Gateway Placement for Marine Buoy Monitoring Using Particle Swarm Optimization (PSO)
Nihayatus Saadah; Faridatun Nadziroh; Nailul Muna; Karimatun Nisa’; Aries Pratiarso; I Gede Puja Astawa; Tri Budi Santoso; Sultan Syahputra Yulianto; Ahmad Baihaqi Adi Putro
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.11026

Abstract

Effective marine environmental monitoring is critical for ensuring navigational safety, with LoRa technology emerging as a promising solution due to its long-range, low-power capabilities. However, the performance of LoRa networks heavily depends on strategic gateway placement, a task often performed manually, leading to suboptimal coverage. This study addresses this challenge by implementing and validating a Particle Swarm Optimization (PSO) algorithm to determine the optimal placement of gateways for a real-world network of 157 marine buoys in the Madura Strait. The PSO algorithm, configured with 30 particles and 100 iterations, was benchmarked against a baseline manual selection method based on geographic centrality. Results demonstrate a significant performance gain: the PSO-optimized configuration achieved 100% network coverage (157 buoys), a 34.2% increase over the 117 buoys covered by the manual method. These findings confirm that employing PSO for gateway placement substantially enhances network efficiency and data reliability, highlighting its value for creating robust and scalable marine IoT applications.
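A bare-bones version of the PSO placement search (30 particles, 100 iterations as reported) is sketched below; the buoy coordinates, coverage radius, number of gateways, and inertia/cognitive/social coefficients are placeholder assumptions.

```python
# Minimal PSO sketch for gateway placement maximizing buoy coverage (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
buoys = rng.uniform(0, 20, size=(157, 2))       # placeholder buoy positions on a km grid
N_GATEWAYS, RADIUS = 3, 8.0                     # assumed gateway count and coverage radius
N_PARTICLES, N_ITER = 30, 100                   # values reported in the abstract
W, C1, C2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social coefficients

def coverage(flat):
    """Number of buoys within RADIUS of at least one gateway."""
    gw = flat.reshape(N_GATEWAYS, 2)
    d = np.linalg.norm(buoys[:, None, :] - gw[None, :, :], axis=2)
    return np.sum(d.min(axis=1) <= RADIUS)

dim = 2 * N_GATEWAYS
pos = rng.uniform(0, 20, size=(N_PARTICLES, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([coverage(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(N_ITER):
    r1, r2 = rng.random((2, N_PARTICLES, dim))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([coverage(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("covered buoys:", coverage(gbest), "of", len(buoys))
```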