Contact Name
Jumanto
Contact Email
jumanto@mail.unnes.ac.id
Phone
+628164243462
Journal Mail Official
sji@mail.unnes.ac.id
Editorial Address
Room 114, Building D2, Floor 1, Department of Computer Science, Universitas Negeri Semarang, Indonesia
Location
Kota Semarang,
Jawa Tengah
INDONESIA
Scientific Journal of Informatics
ISSN : 2407-7658     EISSN : 2460-0040     DOI : https://doi.org/10.15294/sji.vxxix.xxxx
Scientific Journal of Informatics (p-ISSN 2407-7658 | e-ISSN 2460-0040), published by the Department of Computer Science, Universitas Negeri Semarang, is a scientific journal of Information Systems and Information Technology. It publishes scholarly writings on pure and applied research in information systems and information technology, as well as general reviews of developments in related theory, methods, and applied sciences. SJI publishes four issues per calendar year (February, May, August, November).
Articles 131 Documents
Freshwater Filling Optimization Based on Price Using XGBoost and Particle Swarm Optimization on Cargo Ship Voyage Yulianto, Ilham; Fauzi, Muhammad Dzulfikar; Safitri, Pima Hani
Scientific Journal of Informatics Vol. 12 No. 2: May 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i2.24988

Abstract

Purpose: Efficient freshwater management is critical in cargo ship operations, yet current practices often involve fixed refilling strategies that ignore price differences across ports and fail to predict actual consumption accurately. These inefficiencies lead to unnecessary operational costs. To address this, the study introduces a combined approach using XGBoost to predict freshwater usage and Particle Swarm Optimization (PSO) to minimize refilling costs through optimal port selection. Methods: Freshwater demand was predicted using an XGBoost regression model trained on real operational data from 2024, which included historical voyage distances and freshwater consumption records from cargo ships. Based on these predictions, Particle Swarm Optimization (PSO) was applied to identify cost-efficient refilling locations along each ship’s route, minimizing total water procurement cost while satisfying operational constraints. The proposed framework was validated through simulated voyage scenarios to evaluate its impact on cost efficiency and planning effectiveness. Result: The integration of XGBoost and PSO effectively optimized freshwater refilling strategies, achieving a relative error of 9.48% in freshwater consumption prediction and cost savings of 9% to 40% across a sample of three ships through strategic port selection based on consumption patterns and price variability. Novelty: Unlike prior works focused on fuel or generic logistics optimization, this study combines XGBoost and PSO to optimize freshwater refilling on cargo ship voyages using actual operational data. The results demonstrate practical, scalable improvements in cost efficiency, making a novel contribution to maritime resource planning.
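The port-selection step described in this abstract can be sketched with a generic particle swarm optimizer. The scenario below (port prices, per-leg demand, tank capacity, penalty constant) is entirely invented for illustration, standing in for the paper's predicted consumption and real price data; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scenario: 4 ports along one voyage, the freshwater price
# at each port, and the predicted consumption on the leg *after* it.
prices = np.array([6.0, 3.5, 5.0, 4.2])          # cost per ton at each port
leg_demand = np.array([20.0, 30.0, 25.0, 15.0])  # predicted tons used per leg
capacity = 60.0                                  # tank capacity in tons
start_level = 10.0

def cost(refills):
    """Total purchase cost plus a large penalty for running dry."""
    level, penalty = start_level, 0.0
    for r, d in zip(refills, leg_demand):
        level = min(level + r, capacity)  # overfilled water is wasted but paid for
        level -= d                        # consume on the next leg
        if level < 0:                     # ran out of water -> infeasible
            penalty += 1e4 * (-level)
            level = 0.0
    return float(np.dot(prices, refills)) + penalty

def pso(n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO over refill amounts (one dimension per port)."""
    dim = len(prices)
    x = rng.uniform(0, capacity, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, capacity)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, cost(g)

plan, total = pso()
```

The penalty term turns the water-balance constraint into an unconstrained objective, so the swarm naturally shifts purchases toward the cheap port while keeping every leg feasible; a naive plan that buys everything at the first, most expensive port would cost far more.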
The Empirical Best Linear Unbiased Prediction and The Empirical Best Predictor Unit-Level Approaches in Estimating Per Capita Expenditure at the Subdistrict Level Fauziah, Ghina; Kurnia, Anang; Djuraidah, Anik
Scientific Journal of Informatics Vol. 12 No. 2: May 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i2.25037

Abstract

Purpose: This study aims to estimate and evaluate per capita expenditure at the subdistrict level in Garut Regency by employing unit-level Small Area Estimation (SAE) techniques, specifically the Empirical Best Linear Unbiased Predictor (EBLUP) and the Empirical Best Predictor (EBP) methods. Methods: The data used in this study are socio-economic data, specifically per capita household expenditure in Garut Regency. Socio-economic data are generally positively skewed rather than normally distributed, so a method that brings the data closer to normality is needed, such as a log-normal transformation. To improve the performance of EBLUP, which can yield inefficient estimators when the normality assumption is violated, this study proposes the Empirical Best Predictor (EBP) method. It handles positively skewed data by applying a log-normal transformation to the sample data so that it more closely conforms to the desired distribution. Result: The EBP results are more stable than those of EBLUP, since EBLUP is highly sensitive to outliers and, in cases where the normality assumption is violated, produces a large mean squared error and inefficient estimators. Evaluating the estimates with both EBLUP and EBP shows Relative Root Mean Squared Error (RRMSE) values above 25%, especially in the subdistricts of Pamulihan, Sukaresmi, and Kersamanah. This is probably because the household samples taken in these three subdistricts are small compared to the others. Novelty: In this research, we use EBP to improve the performance of EBLUP, which produces inefficient estimators when the normality assumption is violated.
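The motivation for the log-normal transformation can be illustrated numerically: applying a log transform to positively skewed, expenditure-like data brings its sample skewness close to zero. The data below are simulated, not the Garut Regency survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

def skewness(x):
    """Sample skewness: the third standardized moment."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

# Simulated per capita expenditure: positively skewed, log-normal-like.
expenditure = rng.lognormal(mean=13.0, sigma=0.8, size=5000)

raw_skew = skewness(expenditure)          # strongly positive
log_skew = skewness(np.log(expenditure))  # near zero after the transform
```

On the log scale the data are approximately normal, which is the condition under which EBLUP's assumptions hold; EBP then back-transforms predictions to the original expenditure scale.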
Artificial Intelligence-Based Leveling System for Determining Severity Level of Autism Spectrum Disorder Rasim, R; Munir, M; Wihardi, Yaya; Ningrayati Amali, Lanto
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.14440

Abstract

Purpose: The aim of this research is to analyze the use of an artificial intelligence (AI)-based leveling system to determine the severity of autism spectrum disorders (ASD). Methods: The research method is a systematic literature review. This study addresses three key questions: (i) What factors are used to determine ASD severity? (ii) What algorithms or AI models are used in classifying ASD severity? (iii) What are the results of this AI-based leveling system in terms of severity levels or categories? Results: The study results identified several key factors that influence ASD severity, including age, IQ, genetic and neurological factors, co-occurring mental health conditions, and sociodemographic variables. Various AI algorithms, including machine learning and deep learning techniques, are used to classify the severity of ASD. The results of this study highlight the effectiveness of AI in providing objective, consistent, and measurable assessments of ASD severity, although challenges such as data quality and ethical considerations remain. AI-based leveling systems show significant potential in improving assessment and intervention processes for ASD. Novelty: This research systematically synthesizes studies on AI-driven ASD severity assessment, providing insights into crucial variables for AI-based evaluation tools. By analyzing the factors influencing severity and the effectiveness of AI models, this study identifies promising approaches for classification. The findings offer valuable contributions to the development of AI-based tools in clinical and educational applications. Further research is necessary to improve AI reliability, address biases, and maximize its potential in ASD assessment and intervention.
Smart Rupiah Recognition: A Mobile Machine Learning Approach for Visually Impaired Users Fadlurrahman, Hanan Nadhif; Affandy, Affandy; Cahyadi, Dede Faiz
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.28930

Abstract

Purpose: Despite advances in assistive technology, low-connectivity areas lack reliable solutions for visually impaired individuals, prompting this study to enhance financial autonomy in cash-based economies. This research addresses high fraud risks and the limitations of online tools like Be My Eyes, which fail in areas with only 40% internet access, by developing a 3MB MobileNetV2 model for offline Rupiah denomination recognition on low-end Android devices. Methods: A MobileNetV2-based Convolutional Neural Network, optimized to 3MB via TensorFlow Lite quantization, was trained on 10,855 augmented images (rotation ±30°, flipping, Gaussian noise, σ=0.1). The Kotlin-based application integrates CameraX for 720p video and Bahasa Indonesia text-to-speech, with a “no object” class. The model was tested on 4–8GB RAM devices and validated through usability evaluations with diverse stakeholders. Result: The model achieves 90% accuracy (F1-score 0.90) at 1000 lux, 85% at <50 lux, 80% at >60° angles, and 88% for “no object,” with 10ms latency. Self-supervised learning (SimCLR) on 2,000 worn notes improves accuracy by 3% (p < 0.05). Usability evaluations yield 95% session success, with TTS and UI Likert scores of 4.2 and 4.0, respectively. Novelty: The 3MB MobileNetV2 model, with 10ms latency and 15% false positive reduction, outperforms YOLOv5 (500MB, 50ms), Vision Transformer (1GB, 200ms), and YOLOv8 (200MB, 30ms). This model shows potential for cross-currency detection through preliminary exploration (e.g., USD and euro), which may advance edge AI and financial inclusion in developing nations.
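Part of the augmentation recipe stated in the Methods (horizontal flipping and Gaussian noise with σ = 0.1) can be sketched with NumPy on a dummy image; the ±30° rotation would normally be done with an image library and is omitted here. This is an illustration of the stated recipe, not the authors' training code.

```python
import numpy as np

rng = np.random.default_rng(7)

def augment(img, sigma=0.1):
    """Return flipped and noise-perturbed variants of one image.

    Images are float arrays scaled to [0, 1]. The paper's ±30° rotation
    step is omitted for brevity (it needs an image-processing library).
    """
    variants = [img, np.fliplr(img)]  # original + horizontal flip
    noisy = [np.clip(v + rng.normal(0.0, sigma, v.shape), 0.0, 1.0)
             for v in variants]       # additive Gaussian noise, σ = 0.1
    return variants + noisy

# A dummy 8x8 grayscale "banknote" image.
img = rng.random((8, 8))
batch = augment(img)  # 4 training variants from 1 source image
```

Augmentation like this multiplies a small labeled dataset into many plausible variants, which helps a compact model such as the quantized MobileNetV2 generalize to worn notes and varied camera angles.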
Elementary School Accreditation Assessment Using Fuzzy Tsukamoto and SMARTER Method Rahmawati, Nurhita; Nurhayati, Oky Dwi; Surarso, Bayu
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.30729

Abstract

Purpose: The primary objective of this study is to develop and validate an Elementary School Accreditation Evaluation Model that is both measurable and fair. The proposed model integrates the Fuzzy Tsukamoto method to calculate and consistently generate the final score of each alternative, and the SMARTER method to produce a prioritized ranking that serves as a practical guide for schools in their efforts to improve and strengthen quality. Methods: This study integrates the Fuzzy Tsukamoto method to process numerical data through a rule-based inference mechanism. Simultaneously, the SMARTER method is employed to systematically assign weights to each criterion and sub-criterion using the Rank Order Centroid (ROC) approach. The evaluation is carried out on 16 alternatives based on four main criteria. The research data are derived from the IASP 2020 instrument issued by BAN-S/M, which serves as the official accreditation standard for schools and madrasahs in Indonesia. Result: The developed structured assessment model proved effective. Through ROC weighting, Criterion K1 was identified as the main determining factor (0.611). System validation using Fuzzy Logic showed a high level of consistency (87.5% agreement) with the manual assessor's decisions, confirming the model's accuracy in replicating assessments based on data triangulation. The SMARTER ranking provides targeted recommendations, placing Alternatives A13, A2, A7, and A8 as standards to be maintained, while pointing to A3 as the priority for immediate improvement. Novelty: This study offers a novel approach by integrating the Fuzzy Tsukamoto and SMARTER methods within the context of primary school accreditation, a combination that has rarely been explored in previous research. The proposed model not only generates evaluation scores but also produces a ranking system that can serve as a reference for school evaluation.
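The Rank Order Centroid weighting used by SMARTER has a closed form: for k criteria ranked from most to least important, w_i = (1/k) Σ_{j=i}^{k} 1/j. A minimal sketch follows; note that the abstract reports 0.611 for criterion K1 while the plain four-criteria ROC formula gives 0.521 for the top rank, so the paper's weighting presumably also spans sub-criteria.

```python
def roc_weights(k):
    """Rank Order Centroid weights for k criteria ranked from most to
    least important: w_i = (1/k) * sum_{j=i}^{k} 1/j.  The weights are
    strictly decreasing and sum to 1."""
    return [sum(1.0 / j for j in range(i, k + 1)) / k
            for i in range(1, k + 1)]

# Four main criteria, as in the study.
w = roc_weights(4)  # [0.5208..., 0.2708..., 0.1458..., 0.0625]
```

ROC needs only the rank order of the criteria, not elicited numeric weights, which is what makes SMARTER attractive when assessors can rank criteria by importance but cannot quantify them.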
UTAUT-2, HOT-Fit, and PLS-SEM for User Acceptance and Success of the Face Recognition Feature in CAT BKN Application Sari, Juwita Dwinda; Warsito, Budi; Wibowo, Catur Edi
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.31229

Abstract

Purpose: The face recognition feature was implemented in the National Civil Service Agency's Computer-Assisted Test in 2021, yet there has been no evaluation of the system's acceptance and success. This study aims to measure user acceptance and evaluate the feature's success using the R Shiny application. Methods: The study utilized 337 respondents from a Google Form-based questionnaire distributed throughout the Regional Office VII of the National Civil Service Agency in Palembang. The hybrid model used was UTAUT-2 and HOT-Fit, with PLS-SEM statistical analysis. Acceptance analysis and feature evaluation were conducted using the developed R Shiny Dashboard. Results: The findings indicated that 15 of the 26 hypotheses were accepted. Behavioral intention and use behavior significantly influence hedonic motivation and habit. User behavior significantly influences user satisfaction, system quality, service quality, information quality, system use, and organizational structure and environment. As users become more familiar with the technology, their experience improves, and system utilization becomes more effective. Novelty: The integration of UTAUT-2 and HOT-Fit models within an R Shiny Dashboard was applied to analyze user acceptance and evaluate the face recognition feature in the Computer-Assisted Test selection process. The findings provide recommendations for feature development and for improving participant face recognition performance. Moreover, the R Shiny Dashboard can be adapted for user experience analysis and system evaluation in other contexts.
A Hybrid Approach of Aspect-Based Sentiment Analysis and Knowledge Extraction for Evaluating Security Perceptions in Digital Payment Applications Fatihaturrahmah, Aisyah; Ditha Tania, Ken
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.31557

Abstract

Purpose: The rapid expansion of digital wallets in Indonesia has heightened concerns regarding user security and trust. This study evaluates user sentiment toward the security features of the DANA digital payment application using Aspect Sentiment Classification (ASC), a subtask of Aspect-Based Sentiment Analysis (ABSA). It aims to compare multiple classification models and generate structured, machine-readable sentiment outputs to support knowledge extraction and system integration. Methods: A total of 4,846 security-related reviews were collected from the Google Play Store using keyword-based filtering, supplemented by 3,000 unfiltered reviews for robustness evaluation. Sentiment labeling was performed using a hybrid rule-based and manual annotation approach. From 300 proportionally sampled reviews (150 positive and 150 negative), the validation achieved 0.8504 accuracy and a Cohen’s κ of 0.951, indicating near-perfect agreement. Five models—Support Vector Machine (SVM), Random Forest (RF), Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (BiLSTM), and IndoBERT—were evaluated using 5-fold stratified cross-validation with random oversampling to address class imbalance. Results: IndoBERT achieved the highest performance with 98% accuracy, an F1-score of 0.974, and an AUC-ROC of 0.996, followed by CNN and BiLSTM. Robustness testing across temporal (DANA June–October) and cross-domain (GoPay) datasets confirmed IndoBERT’s strong generalization with minimal F1-score variation. Novelty: Unlike previous ABSA studies that addressed multiple aspects, this research focuses exclusively on the security aspect, providing fine-grained insights into user trust. The integration of XML-based structured output enhances interpretability and interoperability in digital financial sentiment analysis, contributing to the development of more secure and transparent fintech ecosystems.
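The annotation-agreement figure reported above is Cohen's κ, which can be computed directly from two annotators' label lists; the toy labels below are invented for illustration and do not reproduce the study's 0.951 value.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two annotators beyond chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Toy check: two annotators agree on 9 of 10 sentiment labels.
ann1 = ["pos"] * 5 + ["neg"] * 5
ann2 = ["pos"] * 4 + ["neg"] * 6
kappa = cohens_kappa(ann1, ann2)  # 0.8 on this toy example
```

κ discounts the agreement two annotators would reach by labeling at random, which is why it is a stricter check of label quality than raw accuracy; values above 0.8 are conventionally read as near-perfect agreement.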
Impact of Feature Engineering on XGBoost Model for Forecasting Cayenne Pepper Prices Pardede, Jasman; Putri Setyaningrum, Anisa; Ilyas Al-Fadhlih, Muhammad
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.32157

Abstract

Purpose: Cayenne pepper represents one of Indonesia’s key horticultural commodities, widely utilized in both household culinary practices and the food processing industry. Nevertheless, its market price is subject to considerable volatility, driven by factors such as weather variability, limited supply, production costs, and inefficiencies in distribution systems. This price instability generates uncertainty that adversely impacts farmers, traders, and consumers. Consequently, the development of a reliable price forecasting model is crucial to facilitate price stabilization and enable data-driven decision-making across the supply chain. This study aims to investigate the extent to which feature engineering techniques can enhance the predictive performance of the Extreme Gradient Boosting (XGBoost) algorithm in forecasting cayenne pepper prices. Through the integration of lag features, moving averages, and seasonal indicators, the proposed model is expected to more effectively capture market dynamics and provide a robust analytical tool for relevant stakeholders. Methods: The forecasting model was constructed using the XGBoost algorithm in combination with various feature engineering methods. The dataset consists of daily price records obtained from Bank Indonesia’s PIHPS system and meteorological variables sourced from BMKG, encompassing the period between 2021 and 2024. The engineered features include lag variables identified through Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) analyses, Simple Moving Averages (SMA), seasonal indicators, and holiday-related variables designed to capture recurring patterns and event-driven price fluctuations. To enhance predictive performance, hyperparameter tuning was conducted using a grid search optimization approach. Result: The optimal model demonstrated substantial performance improvements under the following hyperparameter configuration: alpha = 0, gamma = 0.3, lambda = 1, learning_rate = 0.05, max_depth = 3, min_child_weight = 3, n_estimators = 200, and subsample = 0.6. The application of feature engineering markedly enhanced the model’s predictive capability, increasing the R² value by 99.10% while reducing the MAE, RMSE, and MAPE by 72.63%, 71.31%, and 72.04%, respectively. These outcomes signify a notable reduction in forecasting errors and demonstrate the model’s improved accuracy. Novelty: This study integrates multi-level price data with weather and holiday-related features, employing ACF and PACF analyses to determine optimal lag values (techniques commonly utilized in statistical modeling). This integration enhances both the accuracy and interpretability of the XGBoost algorithm, thereby providing a practical and effective tool for agricultural price forecasting and market planning.
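The lag and moving-average features described in the Methods can be sketched as a small feature builder; the lag set and window below are illustrative choices, not the ones selected by the paper's ACF/PACF analysis.

```python
import numpy as np

def make_features(prices, lags=(1, 2, 7), sma_window=7):
    """Build a design matrix of lagged prices plus a simple moving
    average for one daily price series; rows without full history
    are dropped.  Returns (X, y) ready for a regressor like XGBoost."""
    prices = np.asarray(prices, dtype=float)
    start = max(max(lags), sma_window)
    rows, targets = [], []
    for t in range(start, len(prices)):
        lag_feats = [prices[t - l] for l in lags]      # price l days ago
        sma = prices[t - sma_window:t].mean()          # trailing SMA
        rows.append(lag_feats + [sma])
        targets.append(prices[t])                      # today's price
    return np.array(rows), np.array(targets)

# Toy daily price series: 1, 2, ..., 30.
prices = np.arange(1.0, 31.0)
X, y = make_features(prices)  # 23 usable rows, 4 features each
```

Seasonal and holiday indicators from the paper would simply be extra columns (e.g. month dummies or a 0/1 holiday flag) appended to each row before training.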
Performance of SARIMA, LSTM, GRU and Ensemble Methods for Forecasting Nickel Prices Irdayanti; Notodiputro, Khairil Anwar; Oktarina, Sachnaz Desta
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.32225

Abstract

Purpose: There are several forecasting methods, including SARIMA, LSTM, and GRU, which are often claimed to exhibit strong performance in capturing patterns in time series data. However, few studies have conducted direct comparisons among these methods. Therefore, it is necessary to conduct a performance evaluation using empirical data, particularly nickel price data. This study also aims to improve forecasting performance by combining prediction outputs from deep learning-based models. Methods: This study utilized data on monthly global nickel prices from January 1990 to May 2025. The models developed include SARIMA, LSTM, GRU, and two ensemble approaches: Weighted Averaging and Bayesian Model Averaging (BMA). Model validation was conducted using walk-forward validation with a sliding window approach to evaluate each model’s generalization performance on out-of-sample validation data. Performance was evaluated using MAPE, RMSE, and MAE. Result: The BMA ensemble approach shows the best performance in forecasting nickel prices, with a MAPE of 5.39%, RMSE of 1897.84, and MAE of 1133.96. Prediction validation produces MAPE values below 10%, which indicates that the forecasting results are accurate. The BMA ensemble approach is able to produce more accurate and stable predictions than the other models. Novelty: This study offers a novel approach combining LSTM and GRU through ensemble methods to forecast global nickel prices using monthly historical data from 1990 to 2025. In contrast to previous studies that relied on single models, the proposed method with the BMA ensemble approach demonstrates improved forecasting accuracy and stability.
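Full Bayesian Model Averaging weights models by posterior model probabilities; a common lightweight stand-in, sketched below with invented validation predictions, weights each model by its inverse validation error and averages the forecasts. This illustrates the ensembling idea only, not the paper's BMA implementation.

```python
import numpy as np

def inverse_error_weights(errors):
    """Weights proportional to 1/error: a simple stand-in for BMA's
    posterior model probabilities (they sum to 1)."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Invented validation-set predictions from two hypothetical base models.
actual = np.array([100.0, 110.0, 120.0, 130.0])
pred_lstm = np.array([95.0, 108.0, 125.0, 128.0])
pred_gru = np.array([103.0, 113.0, 118.0, 133.0])

w = inverse_error_weights([mape(actual, pred_lstm),
                           mape(actual, pred_gru)])
ensemble = w[0] * pred_lstm + w[1] * pred_gru
```

In this toy case the two models err in opposite directions on most points, so the weighted average partially cancels their errors, which is the mechanism behind the ensemble's improved MAPE in the study.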
Which Features Matter Most? Evaluating Numerical and Textual Features for Helpfulness Classification in an Imbalanced Dataset using XGBoost Kirani, Anindita Putri; Saptono, Ristu; Anggrainingsih, Rini
Scientific Journal of Informatics Vol. 12 No. 4: November 2025
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v12i4.33443

Abstract

Purpose: This study aims to develop and realistically evaluate a reliable model for identifying helpful online reviews, particularly in the context of Indonesian-language texts, which are often informal and challenging. Methods: This study addresses several key challenges in predicting review helpfulness: the relative effectiveness of numerical features from metadata compared with traditional text representations (TF-IDF, FastText) on noisy data; the impact of severe class imbalance; and the limitations of standard validation compared with time-based validation. To address these challenges, we built an XGBoost model and evaluated various feature combinations. A hybrid approach combining SMOTE and scale_pos_weight was applied to handle class imbalance, and the best configuration was further assessed using time-based validation to better simulate real-world conditions. Result: The results show that the model based on numerical features consistently outperformed the text-based model, achieving a peak macro F1-score of 0.7214. Compared to the IndoBERT baseline (F1-score = 0.6400) and the RCNN FastText baseline (F1-score = 0.5317), this indicates that simpler feature-driven models can provide more reliable predictions under noisy review data. Time-based validation further revealed a performance decline of up to 8.06%, confirming the presence of concept drift and highlighting that standard validation tends to yield overly optimistic estimates. Novelty: The main contribution of this research lies in offering a robust methodology while demonstrating the superiority of metadata-based approaches in this context. By quantifying performance degradation through temporal validation, this study provides a more realistic benchmark for real-world applications and highlights the critical importance of regular model retraining.
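Two ingredients mentioned above are easy to make concrete: XGBoost's scale_pos_weight is conventionally set to the negative-to-positive count ratio, and the macro F1-score is the unweighted mean of per-class F1 scores. The labels below are toy data, not the study's review dataset.

```python
from collections import Counter

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (macro averaging)."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Imbalanced toy labels: 20 helpful (1) vs 180 unhelpful (0) reviews.
y = [1] * 20 + [0] * 180
counts = Counter(y)
scale_pos_weight = counts[0] / counts[1]  # negative / positive = 9.0
```

Macro averaging gives the rare "helpful" class the same influence as the majority class, which is why the paper reports macro F1 rather than accuracy on its imbalanced data; scale_pos_weight makes XGBoost's loss weight minority-class mistakes more heavily for the same reason.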