Location: INDONESIA
JOURNAL OF APPLIED INFORMATICS AND COMPUTING
ISSN: -     EISSN: 2548-6861     DOI: 10.3087
Core Subject: Science
Journal of Applied Informatics and Computing (JAIC), Volume 2, Number 1, July 2018. Contains articles drawn from research in applied information technology and computing, with e-ISSN: 2548-9828. Three articles were substantively reviewed by the editorial team and reviewers.
Articles: 695 Documents
Performance of Multivariate Missing Data Imputation Methods on Climate Data
Widyawati, Amalia Safira; Fitrianto, Anwar; Silvianti, Pika
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11316

Abstract

Climate data plays an important role in many aspects of life. However, missing data is common and can interfere with data processing and reduce the quality of analysis, so appropriate handling methods are needed to keep the analysis results valid. This study compares the performance of several imputation methods for missing multivariate data based on the identification of actual missing data patterns, and determines the appropriate imputation method given the missing data mechanism. It also applies the best method to data with actual missing data patterns to assess its effect on the descriptive statistics required for further climatological analysis. The methods compared are monthly averages, missRanger, k-Nearest Neighbor (k-NN), and Iterative Robust Model-based Imputation (IRMI). The missing data information was obtained from Global Surface Summary of the Day (GSOD) data, namely temperature, precipitation, humidity, pressure, and wind speed recorded daily over 11 years, with a missing data proportion of 11.4%. The missing data patterns were then applied to relatively complete NASA POWER data to evaluate the imputation results. The results show that IRMI is less capable of handling extreme missing data conditions, namely 17 completely missing rows. In contrast, k-NN, missRanger, and monthly averages performed better in both extreme and non-extreme conditions. Of the four methods, monthly averaging was chosen because it handled the missing data while preserving the multivariate structure, achieving a 58% sMAPE and a 2.64% relative difference.
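The monthly-average imputation and sMAPE evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's code: the synthetic series, column name, and gap position are invented, and a real run would use the GSOD/NASA POWER variables.

```python
import numpy as np
import pandas as pd

# Illustrative daily series with an injected gap (stand-in for a GSOD variable).
idx = pd.date_range("2020-01-01", periods=120, freq="D")
s = pd.Series(25 + 5 * np.sin(np.arange(120) / 10), index=idx, name="temp")
s.iloc[40:45] = np.nan  # simulate a run of missing days

# Monthly-average imputation: fill each gap with its calendar month's mean.
monthly_mean = s.groupby(s.index.month).transform("mean")
imputed = s.fillna(monthly_mean)

def smape(actual, pred):
    """Symmetric mean absolute percentage error, in percent."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return 100.0 * np.mean(2.0 * np.abs(pred - actual)
                           / (np.abs(actual) + np.abs(pred)))
```

The same `smape` function can then score any imputation against the held-back true values.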
Comparative Analysis of 1D CNN Architectures for Guitar Chord Recognition from Static Hand Landmarks
Naya, Rafi Abhista; Tanuwijaya, Evan
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11339

Abstract

Vision-based guitar chord recognition offers a promising alternative to traditional audio-driven methods, particularly for silent practice, classroom environments, and interactive learning applications. While existing research predominantly relies on full-frame image analysis using 2D convolutional networks, the use of structured hand landmarks remains underexplored despite their advantages in robustness and computational efficiency. This study presents a comprehensive comparative analysis of three one-dimensional convolutional neural network architectures—CNN-1D, ResNet-1D, and Inception-1D—for classifying seven guitar chord types using 63-dimensional static hand-landmark vectors extracted via MediaPipe Hands. The methodology encompasses extensive dataset preprocessing, targeted landmark augmentation, Bayesian hyperparameter optimization, and stratified 5-fold cross-validation. Results show that CNN-1D achieves the highest mean accuracy (97.61%), outperforming both ResNet-1D and Inception-1D, with statistical tests confirming significant improvements over ResNet-1D. Robustness experiments further demonstrate that CNN-1D maintains superior resilience under Gaussian noise, landmark occlusion, and geometric scaling. Additionally, CNN-1D provides the fastest inference and most stable computational performance, making it highly suitable for real-time or mobile deployment. These findings highlight that, for structured and low-dimensional landmark data, simpler convolutional architectures outperform deeper or multi-branch designs, offering an efficient and reliable solution for vision-based guitar chord recognition.
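To make the data flow concrete, here is a minimal NumPy sketch of how a 63-dimensional landmark vector (21 MediaPipe hand landmarks x 3 coordinates) passes through a 1-D convolution, pooling, and a dense head. The filter counts and kernel size are illustrative, not the paper's tuned architecture.

```python
import numpy as np

def conv1d_relu(x, kernels):
    """Valid 1-D convolution followed by ReLU.
    x: (c_in, length) input; kernels: (c_out, c_in, k) filters."""
    c_out, c_in, k = kernels.shape
    length = x.shape[1] - k + 1
    out = np.empty((c_out, length))
    for o in range(c_out):
        for t in range(length):
            out[o, t] = np.sum(x[:, t:t + k] * kernels[o])
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
x = rng.random((1, 63))                               # 21 landmarks x 3 coords, one channel
h = conv1d_relu(x, rng.standard_normal((8, 1, 5)))    # 8 feature maps of length 59
pooled = h.mean(axis=1)                               # global average pooling -> (8,)
logits = rng.standard_normal((7, 8)) @ pooled         # dense layer -> 7 chord classes
```

A trained model would of course learn `kernels` and the dense weights rather than sample them randomly.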
Ethical Analysis of Online Media Journalistic Photos Worth Publishing Based on Images Using the Convolutional Neural Network Method
Rijal, Syamsul; Mardin, Aslam; Anas, Anas; Sharif, Tirta Chiantalia; Sunardi, Sunardi
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11349

Abstract

This study aims to develop and test a Convolutional Neural Network (CNN)-based artificial intelligence model to analyze and classify online media journalistic photos based on ethical criteria for publication suitability (suitable or unsuitable). In the context of digital journalism, the process of filtering sensitive visual content that potentially violates the code of ethics is often time-consuming and prone to subjectivity. Therefore, a CNN model is proposed as an automated solution to identify images containing visual elements deemed unethical. An annotated image dataset was used to train and test the CNN model. The model test results showed effective and robust performance in classifying the ethical suitability of photos. The model achieved a weighted average accuracy of 0.86 (86%) and a weighted average F1-score of 0.86. Specifically, the model performed very well in identifying "suitable" photos with precision, recall, and F1-score values ranging from 0.88 to 0.89. Performance in the "Unsuitable" class was also relatively strong with an F1-score of 0.81. Overall, these results confirm that the CNN method has great potential as an efficient and objective decision support system in the visual content editing process. Implementing this model not only speeds up the editorial process but also improves online media's adherence to journalistic ethical standards by minimizing the risk of publishing potentially unethical photos.
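The weighted-average metrics reported above can be computed directly from a confusion matrix. A small sketch (the 2x2 matrix in the test is invented, not the paper's results):

```python
import numpy as np

def prf_from_confusion(cm):
    """Per-class precision, recall, F1 from a confusion matrix (rows = true class)."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)
    f1 = np.where(precision + recall > 0,
                  2 * precision * recall / np.maximum(precision + recall, 1e-12),
                  0.0)
    return precision, recall, f1

def weighted_f1(cm):
    """Support-weighted average F1, the style of summary reported in the abstract."""
    cm = np.asarray(cm, float)
    support = cm.sum(axis=1)  # true examples per class
    return float(np.sum(prf_from_confusion(cm)[2] * support) / support.sum())
```

Weighting by support means the majority ("suitable") class dominates the average, which is why per-class scores are worth reporting alongside it.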
Enhancing Aspect-Based Sentiment Analysis via Hugging Face Fine-Tuned IndoBERT
Aprilah, Thania; Setiadi, De Rosal Ignatius Moses; Herowati, Wise
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11409

Abstract

Aspect-Based Sentiment Analysis (ABSA) on hotel reviews faces significant challenges regarding semantic complexity and severe class imbalance, particularly in low-resource languages like Indonesian. This study evaluates the effectiveness of fine-tuning IndoBERT, a pre-trained Transformer model, to address these issues by benchmarking it against classical statistical methods (TF-IDF) and static embeddings (Sentence-BERT). Utilizing the HoASA dataset, the experiment implements a Random Oversampling strategy at the text level to mitigate data sparsity in minority classes. Empirical results demonstrate that the fine-tuned IndoBERT significantly outperforms baselines on the majority of aspects, achieving a global accuracy of 97% and macro F1-score of 0.92. Granular per-aspect analysis reveals that the model’s self-attention mechanism captures linguistic context robustly in tangible aspects (e.g., wifi, service), yet faces persistent challenges in highly ambiguous aspects such as smell (bau) and general. Statistical significance tests (Paired t-test and Wilcoxon) confirm that the performance gains over baselines are statistically significant (p < 0.05) and not due to random chance. The study concludes that leveraging contextual representations from IndoBERT, combined with data balancing strategies, offers a superior and statistically robust solution for handling linguistic variations and class bias in the Indonesian hospitality domain.
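The text-level Random Oversampling step mentioned above can be sketched in a few lines: minority-class texts are duplicated at random until every class matches the majority count. The function and its seed are illustrative, not the paper's implementation.

```python
import random
from collections import Counter

def random_oversample(texts, labels, seed=42):
    """Text-level random oversampling: duplicate randomly chosen minority-class
    examples until every class reaches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_texts, out_labels = list(texts), list(labels)
    for cls, n in counts.items():
        pool = [t for t, lab in zip(texts, labels) if lab == cls]
        for _ in range(target - n):
            out_texts.append(rng.choice(pool))  # sample with replacement
            out_labels.append(cls)
    return out_texts, out_labels
```

Oversampling must be applied to the training split only; duplicating texts before the train/test split would leak copies of test sentences into training.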
Public Sentiment Analysis of the Free Nutritious Meals Program (MBG) on Social Media X Using the Naive Bayes Method
Aprianti, Ni Nyoman; Desmayani, Ni Made Mila Rosa; Libraeni, Luh Gede Bevi; Indrawan, I Gusti Agung; Radhitya, Made Leo
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11420

Abstract

This study aims to analyze public sentiment towards the Free Nutritious Meals Program (MBG) launched by the government, utilizing data from the X (Twitter) platform using the Naïve Bayes method. The background of this study is based on the high level of public attention towards the MBG program, which targets school children, toddlers, pregnant women, and nursing mothers, as well as the prevalence of diverse opinions on social media. Data was collected through a crawling process during the period of April 28 to May 28, 2025, using keywords related to MBG, resulting in 12,310 tweets. The data processing stages included text preprocessing (cleansing, case folding, tokenizing, filtering, stemming), word weighting with TF-IDF, training and test data division, and testing using a confusion matrix. The results show that the Naïve Bayes method is capable of classifying sentiment into three categories: positive, negative, and neutral, with optimal performance on an 80:20 data split, resulting in an accuracy of 86.78%, precision of 86.86%, recall of 86.78%, and an F1-score of 86.58%. The majority of public sentiment towards the MBG program was positive, reflecting support for the program's benefits in improving the nutrition of school children and alleviating the economic burden on families. This study is expected to serve as a reference for the government in evaluating public policy and communication strategies, as well as contributing academically to the development of text mining and sentiment analysis studies on social media.
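The preprocessing pipeline named above (cleansing, case folding, tokenizing, filtering) can be sketched as below. The stopword list is a tiny invented sample, and stemming is omitted; a real pipeline would use a full Indonesian stopword resource and a stemmer such as Sastrawi.

```python
import re

# Tiny illustrative Indonesian stopword list (not the study's actual resource).
STOPWORDS = {"yang", "dan", "di", "ke", "ini", "itu"}

def preprocess(tweet):
    """Cleansing -> case folding -> tokenizing -> stopword filtering."""
    text = re.sub(r"https?://\S+|@\w+|#\w+", " ", tweet)  # cleansing: URLs, mentions, hashtags
    text = re.sub(r"[^A-Za-z\s]", " ", text)              # cleansing: keep letters only
    tokens = text.lower().split()                          # case folding + tokenizing
    return [t for t in tokens if t not in STOPWORDS]       # filtering
```

The resulting token lists would then feed TF-IDF weighting and the Naive Bayes classifier.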
The Influence of Knowledge Management and Digital Competence on Employee Performance: Mediating Role of Innovative Behavior
Sabila, Amalia; Afrina, Mira; Tania, Ken Ditha
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11529

Abstract

Rapid technological changes in the era of Industry 4.0 and 5.0 have made digital knowledge and skills increasingly important in improving how employees perform their tasks, yet earlier research has given mixed results, showing there is still much to learn. Based on the Knowledge-Based View (KBV) theory, this study examines how knowledge management (KM) and digital competence (DC) directly and indirectly affect employee performance (EP) through innovative work behavior (IWB). Data were obtained using a questionnaire and analyzed with the Partial Least Squares-Structural Equation Modeling (PLS-SEM) method in SmartPLS 4.1.1.4. The sample comprised all employees in the case study (N = 56), selected by census sampling. The study found that KM had a significant impact on IWB (p < 0.05) but no significant direct impact on EP (p > 0.05), while DC had a significant impact on EP (p < 0.05) but not on IWB (p > 0.05). IWB played an important role in improving EP and also mediated the relationship between KM and EP. Theoretically, this study adds to KBV theory by explaining how KM boosts performance through indirect paths and by showing that digital competence plays a limited role in fostering innovative behavior. Practically, the findings offer actionable implications for HR practitioners in designing performance systems that reward innovative behaviour, thereby motivating employees to utilize knowledge and digital tools more creatively to enhance productivity and service quality in medium enterprises.
Enhancing the Predictive Accuracy of Corrosion Inhibition Efficiency Using Gradient Boosting with Feature Engineering and Gaussian Mixture Model
Amri, Sahrul; Akrom, Muhamad; Trisnapradika, Gustina Alfa
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11560

Abstract

The development of quantitative structure-property relationship (QSPR) models for predicting corrosion inhibition efficiency (IE) often faces challenges due to small datasets, which heighten the risk of overfitting and result in less reliable performance assessments. This research creates an entirely leakage-free modeling framework by combining per-fold preprocessing, training-only data augmentation, and rigorous Leave-One-Out Cross-Validation (LOOCV). A set of 20 pyridazine derivatives was evaluated using 12 quantum-chemical descriptors, including HOMO, LUMO, ΔE, dipole moment, electronegativity, hardness, softness, and the electron-transfer fraction. An initial assessment showed that all baseline models lacking augmentation (Gradient Boosting, Random Forest, SVR, and XGBoost) demonstrated limited predictive power (R² < 0.20), revealing the dataset's inherently low information complexity. To enhance representation in the feature space, a multi-scale Gaussian Mixture Model (GMM) was used to generate chemically valid synthetic samples, with all components trained solely on the training subset of each LOOCV fold. This strategy consistently improved model performance. The two most successful configurations, XGBoost + GMM v2 and Random Forest + GMM v3, reached R² values of 0.4457 and 0.4108, respectively, along with significant decreases in RMSE, MAE, and MAPE. These findings illustrate that GMM-based generative augmentation effectively captures multicluster structures within the descriptor space while expanding the chemical variability domain in a controlled way. While the resulting R² values remain inadequate for high-precision quantitative predictions, the proposed methodology provides a solid basis for early-stage evaluation of corrosion inhibitors in situations with limited data. Future research will aim to integrate advanced DFT-derived descriptors, molecular graph representations, and tests against larger external datasets to enhance model generalizability.
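The leakage-free structure described above, where scaling and augmentation see only the training fold of each LOOCV split, can be sketched as follows. Gaussian jitter stands in for the paper's multi-scale GMM sampler, and all function names are illustrative.

```python
import numpy as np

def loocv_predictions(X, y, fit_predict, augment, seed=0):
    """Leave-one-out CV in which standardisation and synthetic augmentation
    are fitted on the training fold only, so the held-out row never leaks."""
    rng = np.random.default_rng(seed)
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False
        X_tr, y_tr = X[mask], y[mask]
        mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0) + 1e-9   # per-fold scaling
        X_aug, y_aug = augment((X_tr - mu) / sd, y_tr, rng)   # train-only augmentation
        preds[i] = fit_predict(X_aug, y_aug, (X[i] - mu) / sd)
    return preds

def jitter_augment(X, y, rng, copies=2, scale=0.1):
    """Stand-in for the paper's GMM sampler: jittered copies of training rows."""
    X_new = np.vstack([X] + [X + rng.normal(0.0, scale, X.shape)
                             for _ in range(copies)])
    return X_new, np.tile(y, copies + 1)
```

Any regressor can be plugged in via `fit_predict(X_train, y_train, x_query)`; the key point is that nothing derived from row `i` touches the model that predicts row `i`.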
Bi-LSTM with Explainable AI for Session Duration-Based Customer Lifetime Value Proxy on Multi-Category E-Commerce Platforms
Nartriani, Yulian Dwi; Setiadi, De Rosal Ignatius Moses
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11578

Abstract

The rapid growth of multi-category e-commerce platforms has increased the importance of behavioral data for predicting Customer Lifetime Value (CLV). However, monetary-based CLV estimation is often infeasible due to incomplete or unavailable transaction records. This study adopts session duration as a short-term behavioral proxy for CLV and proposes a Bidirectional Long Short-Term Memory (Bi-LSTM) model enhanced with a Temporal Attention mechanism to improve predictive accuracy. The publicly available REES46 dataset, consisting of 1.6 million events and 276,000 unique sessions, is used with preprocessing steps including label encoding, temporal feature construction, and outlier-aware sampling to address the highly right-skewed distribution of session durations. Four baseline models, Decision Tree, Random Forest, Extreme Gradient Boosting (XGBoost), and conventional Long Short-Term Memory (LSTM), are implemented for comparative evaluation. The baseline LSTM achieves MAE = 0.0080 and RMSE = 0.0322. The proposed Bi-LSTM v3 model, equipped with Temporal Attention and structured sampling, demonstrates substantial performance improvement, achieving MAE = 0.0043 (≈368 seconds) and RMSE = 0.0172 (≈1466 seconds), representing an accuracy gain of approximately 45–50% over the baseline. Explainability analysis using SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) confirms that the time_diff feature is the dominant contributor at both global and local levels, aligning with the behavior of the attention mechanism. Additionally, the integration of Explainable Artificial Intelligence (XAI) provides transparent insights into model decision patterns. These findings show that combining Bi-LSTM, Temporal Attention, and XAI yields an accurate and interpretable framework for session duration prediction, supporting the use of session duration as a feasible CLV proxy in multi-category e-commerce environments.
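The temporal-attention idea referenced above reduces a sequence of hidden states to one context vector via a softmax over per-timestep scores. A minimal NumPy sketch (the scoring vector `w` would be learned in the real model; this is not the paper's implementation):

```python
import numpy as np

def temporal_attention(H, w):
    """Temporal attention over a sequence of hidden states.
    H: (T, d) hidden states (e.g. Bi-LSTM outputs); w: (d,) scoring vector.
    Returns the attention-weighted context vector (d,) and the weights (T,)."""
    scores = H @ w                            # one relevance score per time step
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ H, weights
```

Inspecting `weights` alongside SHAP/LIME attributions is how one checks, as the abstract does, that the attention pattern agrees with feature importance.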
Performance Analysis of YOLO, Faster R-CNN, and DETR for Automated Personal Protective Equipment Detection
Naufaldihanif, Rihan; Kurniawan, Dedy; Tania, Ken Ditha
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11593

Abstract

Automated monitoring of Personal Protective Equipment (PPE) is crucial for enhancing safety in high-risk environments like construction sites, yet selecting the optimal detection model requires careful evaluation of accuracy versus efficiency trade-offs. This study presents a comparative performance analysis across distinct object detection paradigms represented by YOLO (YOLOv8, YOLOv11n), Faster R-CNN, and DETR to benchmark their suitability for real-time PPE detection. However, this study moves beyond a simple technical benchmark by also proposing a logical process to transform raw model detections (e.g., 'person', 'hardhat') into actionable compliance verification information (e.g., 'Compliant'/'Non-Compliant'). Using a curated construction site safety dataset, models were evaluated based on standard accuracy metrics (including mAP@.5:.95) and efficiency measures (inference latency). Results indicate that DETR and YOLOv11n achieved the highest overall accuracy with an identical mAP@.5:.95 of 0.770, closely followed by YOLOv8 (0.763), while the YOLO family demonstrated significantly superior real-time efficiency (6-7 ms latency). Faster R-CNN recorded a lower mAP (0.703) and the highest latency. Conclusively, YOLOv11n offers the most compelling balance for the detection phase, and the proposed logical process provides a practical method for integrating this technical output into automated safety monitoring systems.
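The proposed logical process, turning raw 'person'/'hardhat' detections into 'Compliant'/'Non-Compliant' labels, can be sketched as a box-overlap rule. The threshold and association rule here are illustrative assumptions, not the paper's exact logic:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def ppe_compliance(person_boxes, hardhat_boxes, overlap_thr=0.05):
    """Map detections to compliance labels: a person is 'Compliant' when at
    least one hardhat box overlaps their box above the threshold."""
    return ["Compliant" if any(iou(p, h) > overlap_thr for h in hardhat_boxes)
            else "Non-Compliant" for p in person_boxes]
```

In practice one would restrict the overlap test to the upper region of the person box so a hardhat lying on the ground is not credited to a nearby worker.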
Security Evaluation of Keycloak-Based Role-Based Access Control in Microservice Architectures Using the OWASP ASVS Framework
Gamayanto, Indra; Christ Kurniawan, Michael; Klavin Sanyoto, Gabriello
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11604

Abstract

The Rocket Car Wash Semarang application operates using a microservice architecture that handles sensitive information such as user identity data, transaction history, and vehicle details. As multiple services interact through authenticated API calls, strong access control is required to protect the system from unauthorized access and privilege escalation. This research evaluates the Keycloak-based Role-Based Access Control (RBAC) implementation by referencing relevant domains of the OWASP Application Security Verification Standard (ASVS) Level 2, specifically V2: Authentication, V3: Session Management, V4: Access Control, and V14: Configuration. The RBAC structure consists of three primary roles—Admin, Owner, and Customer—and the assessment examines the correctness of role–permission mapping and token-based authorization across microservices. The security evaluation was conducted through configuration auditing, API endpoint verification using Postman, JWT token validation, and automated penetration testing using OWASP Zed Attack Proxy (ZAP). The ZAP scan targeted common web vulnerabilities, particularly misconfigurations and weaknesses in HTTP security headers. The results indicate that Keycloak effectively enforces centralized authentication and authorization, with no critical issues such as Broken Access Control identified. However, several non-critical weaknesses were found, including incomplete Content Security Policy (CSP) directives and missing HSTS headers. These findings show that the RBAC implementation meets core ASVS Level 2 controls, while further improvements in security header configuration are required to enhance overall system resilience.
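The token-based RBAC check evaluated above hinges on reading role claims out of a Keycloak access token. A minimal sketch of that step (Keycloak places realm roles under the `realm_access.roles` claim; note this decodes the payload only and does NOT verify the signature, which production code must do against the realm's JWKS keys):

```python
import base64
import json

def realm_roles(token):
    """Return the realm roles carried in a Keycloak-style JWT access token.
    Illustration only: the signature is not verified here."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)       # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return set(claims.get("realm_access", {}).get("roles", []))

def authorize(token, required_role):
    """RBAC gate: allow the call only when the token carries the required role."""
    return required_role in realm_roles(token)
```

Testing each endpoint with tokens for Admin, Owner, and Customer, as the study does with Postman, amounts to checking that `authorize` style gates are enforced server-side for every route.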