Contact Name
Yuhefizar
Contact Email
jurnal.resti@gmail.com
Phone
+628126777956
Journal Mail Official
ephi.lintau@gmail.com
Editorial Address
Politeknik Negeri Padang, Kampus Limau Manis, Padang, Indonesia.
Location
INDONESIA
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
ISSN: 2580-0760     EISSN: 2580-0760     DOI: https://doi.org/10.29207/resti.v2i3.606
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) is intended as a medium for scholarly publication of research results, ideas, and critical-analytical studies in Systems Engineering, Informatics/Information Technology, Informatics Management, and Information Systems. It is part of the spirit of disseminating knowledge produced by research and thought, in service to the wider community and as a reference source for academics in the field of Information and Technology. Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) accepts scientific articles with a research scope covering:
- Software Engineering
- Hardware Engineering
- Information Security
- Systems Engineering
- Expert Systems
- Decision Support Systems
- Data Mining
- Artificial Intelligence Systems
- Computer Networks
- Computer Engineering
- Image Processing
- Genetic Algorithms
- Information Systems
- Business Intelligence and Knowledge Management
- Database Systems
- Big Data
- Internet of Things
- Enterprise Computing
- Machine Learning
- Other relevant topics
Articles: 25 Documents
Issue: Vol 9 No 2 (2025): April 2025
Implementation of Generative Language Models (GLM) in Cyber Exercise Secure Coding using Prompt Engineering
Sidabutar, Jeckson; Osdie, Alfido
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6012

Abstract

With the advancement of technology, the need for secure software is becoming increasingly urgent due to the rise in application vulnerabilities. In 2022, the National Cyber and Encryption Agency (BSSN) recorded 2,348 cases of web defacement, one of the main causes being the lack of attention to secure coding practices during software development. This study explores the use of Generative Language Models (GLMs), such as ChatGPT, in secure coding training to enhance developers' skills. The GLMs were implemented in a cybersecurity platform designed specifically for secure coding training, where they also serve as learning assistants that users can interact with during the cyber exercise. The results show that the cyber exercise using GLMs significantly improved users' secure coding skills, as evidenced by a comparison of pre-test and post-test scores, indicating an increase in knowledge and proficiency in secure coding practices.
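The pre-test/post-test comparison described in the abstract above can be illustrated with a minimal sketch; the scores below are invented placeholders, not data from the study:

```python
from statistics import mean

def score_gain(pre_scores, post_scores):
    """Summarize skill improvement between a pre-test and a post-test.

    Returns the mean score gain and the fraction of participants who improved.
    """
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    improved = sum(1 for g in gains if g > 0) / len(gains)
    return mean(gains), improved

# Hypothetical scores for five trainees (0-100 scale), for illustration only
pre = [40, 55, 60, 45, 50]
post = [70, 65, 80, 60, 75]
mean_gain, frac_improved = score_gain(pre, post)
print(mean_gain, frac_improved)
```

A real evaluation would pair this with a significance test over the per-participant gains.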
Large Language Model-Based Extraction of Logic Rules from Technical Standards for Automatic Compliance Checking
Nugroho, Rizky; Krisnadhi, Adila; Saptawijaya, Ari
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6285

Abstract

In this research, we design logic rules as a representation of technical standards documents related to ship design, to be used in automatic compliance checking. We present a novel design of logic rules based on a general pattern of technical standards' clauses that can be produced automatically from text using a large language model (LLM), together with a method to extract these logic rules from text. First, we design data structures to represent the technical standards and the logic rules used to process the data. Second, the representation of the technical standards is produced manually and tested to ensure that it yields the same compliance conclusions as human judgment. Third, two prompting methods, a pipeline method and few-shot prompting, are given to the LLM to instruct it to extract logic rules from text following the design. Evaluation of the produced logic rules shows that the pipeline method achieves an accuracy of 0.57, a precision of 0.49, and a recall of 0.62, while few-shot prompting achieves an accuracy of 0.33, a precision of 0.43, and a recall of 0.5. These results show that an LLM is able to extract a logic-rule representation of technical standards, and that the pipeline prompting method outperforms few-shot prompting.
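The accuracy, precision, and recall figures above follow the standard definitions over a confusion matrix. As a reminder, they can be computed like this; the counts below are invented so that the rounded metrics happen to match the pipeline method's reported scores, purely for illustration:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Illustrative counts only; the paper evaluates extracted logic rules
acc, prec, rec = classification_metrics(tp=31, tn=37, fp=32, fn=19)
print(round(acc, 2), round(prec, 2), round(rec, 2))  # 0.57 0.49 0.62
```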
Enhancing Problem-Solving Reliability with Expert Systems and Krulik-Rudnick Indicators
Sari, Lita; Jufriadif Ma'am; Addini Yusmar; Khairiyah Khadijah; Sri Wahyuni; Naufal Ibnu Salam
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6333

Abstract

Problem-solving is one of the skills needed in the 21st century, but there is a significant gap between the ideal conditions and the reality of students' problem-solving skills. One method that can improve these skills is the Krulik and Rudnick method, but its implementation in an expert system is still limited. This research aims to build an expert system that determines a student's problem-solving level using Krulik and Rudnick's problem-solving indicators, processed with forward chaining and the certainty factor algorithm. The study had five stages: data analysis, rule generation, certainty measurement, prediction, and testing. The data was prepared by expanding the 5 Krulik and Rudnick problem-solving indicators into 35 statements. Each statement was categorized using forward chaining, producing three rule outcomes: low, medium, and high. The resulting problem-solving level is then given a confidence value computed with the certainty factor. The system's predictions were evaluated using a confusion matrix, yielding an accuracy of 80%, a precision of 92%, and a recall of 85%, indicating reliable performance in measuring the level of problem-solving. This research can serve as a reference for supporting problem-solving in more advanced educational and professional environments.
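The forward-chaining-plus-certainty-factor combination described above can be sketched minimally. The rules, statements, and CF values below are invented for illustration, and the combination formula shown is the classic MYCIN-style rule for positive CFs; the paper's exact formulation may differ:

```python
def combine_cf(cf1, cf2):
    """Combine two positive certainty factors (MYCIN-style)."""
    return cf1 + cf2 * (1 - cf1)

def infer_level(answers, rules):
    """Toy forward chaining: fire every rule whose premise statements are all
    affirmed, accumulating certainty per concluded problem-solving level."""
    certainty = {}
    for premise, level, cf in rules:
        if all(answers.get(stmt) for stmt in premise):
            certainty[level] = combine_cf(certainty.get(level, 0.0), cf)
    # Return the level with the highest accumulated certainty
    return max(certainty.items(), key=lambda kv: kv[1]) if certainty else None

# Hypothetical rules: (premise statement ids, concluded level, rule CF)
rules = [
    (("s1", "s2"), "high", 0.8),
    (("s1",), "high", 0.5),
    (("s3",), "low", 0.6),
]
answers = {"s1": True, "s2": True, "s3": False}
print(infer_level(answers, rules))
```

Here both "high" rules fire, so their CFs combine to 0.8 + 0.5 * (1 - 0.8) = 0.9.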
Measuring Factors of Trust in the Use of E-Government: A Multi-Factor Analysis of the E-Government in Indonesia
Altino, Iqbal Caraka; Sudarto, Reska Nugroho; Sensuse, Dana Indra; Lusa, Sofian; Putro, Prasetyo Adi Wibowo; Indriasari, Sofiyanti; Brillianto, Bramanti
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6016

Abstract

The implementation of dynamic records management applications within the Indonesian government remains relatively limited, with a lack of comprehensive integration between authorised institutions at both the central and regional levels. This research examines the impact of technical aspects, government agency variables, citizen variables, and risk indicators on trust in e-government. Furthermore, this study seeks to establish the effect of social factors and the advantages of trust in e-government. Finally, this research shows how trust in e-government influences satisfaction, willingness to use, and acceptance of e-government. The study examined 117 respondents using the integrated dynamic archival information system - SRIKANDI. Technical and risk factors were found to positively influence trust in e-government, with effects on satisfaction, intention to use, and adoption of e-government. Those who trusted SRIKANDI were more likely to utilize and implement the program. The findings indicate that for civil servants, trust in the government is also a factor influencing the utilisation of e-government services.
Hand Sign Recognition of Indonesian Sign Language System SIBI Using Inception V3 Image Embedding and Random Forest
Sari, Mayang; Jamzuri, Eko Rudiawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6156

Abstract

This paper presents a sign language recognition system for the Indonesian Sign Language System SIBI using image embeddings combined with a Random Forest classifier. A dataset comprising 5280 images across 24 classes of SIBI alphabet symbols was utilized. Image features were extracted using the Inception V3 image embedding, and classification was performed using Random Forest algorithms. Model evaluation conducted through K-Fold cross-validation demonstrated that the proposed model achieved an accuracy of 59.00%, an F1-Score of 58.80%, a precision of 58.80%, and a recall of 59.00%. While the performance indicates room for improvement, this study lays the groundwork for enhancing sign language recognition systems to support the preservation and broader adoption of SIBI in Indonesia.
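The K-Fold cross-validation protocol used above can be sketched in plain Python. The labels and the majority-class "model" below are stand-ins chosen to keep the sketch self-contained; the paper's features are Inception V3 embeddings and its classifier is a Random Forest:

```python
from collections import Counter

def kfold_indices(n, k):
    """Split n sample indices into k contiguous folds (no shuffling)."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def fit_majority(y_train):
    """Stand-in 'model': always predicts the majority training label."""
    return Counter(y_train).most_common(1)[0][0]

def cross_validate(y, k):
    """Mean accuracy of the stand-in model over k train/test splits."""
    folds = kfold_indices(len(y), k)
    accs = []
    for i, test_idx in enumerate(folds):
        train_y = [y[j] for f, fold in enumerate(folds) if f != i for j in fold]
        pred = fit_majority(train_y)
        accs.append(sum(1 for j in test_idx if y[j] == pred) / len(test_idx))
    return sum(accs) / len(accs)

# Toy binary labels standing in for the 24 SIBI alphabet classes
labels = [0] * 6 + [1] * 4
print(cross_validate(labels, k=5))
```

Each fold serves as the test set exactly once, and the reported score is the average across folds, which is what the paper's accuracy, F1, precision, and recall figures summarize.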
Comparison of Sugarcane Drought Stress Based on Climatology Data using Machine Learning Regression Model in East Java
Aries Suharso; Yeni Herdiyeni; Suria Darma Tarigan; Yandra Arkeman
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6159

Abstract

The Crop Water Stress Index (CWSI), derived from vegetation features (NDVI) and canopy thermal temperature (LST), is an effective method to evaluate sugarcane sensitivity to drought using satellite data. However, obtaining CWSI values is complicated. This study introduces a novel approach to estimating CWSI from climatological data: average air temperature, humidity, rainfall, sunshine duration, and wind speed, obtained from the local weather station BMKG Malang City, East Java, for the period 2021-2023. Before estimating CWSI, we analyzed sugarcane water-stress phenology, examined the strength of the correlation between climatological features and CWSI, and assessed the potential of adding lag features. Our proposed prediction model uses climatological features with additional lag features in a machine learning regression approach, with 5-fold cross-validation on the training-testing data split and hyperparameter optimization. Several machine learning regression models were implemented and compared. The evaluation showed that the SVR model achieved the best accuracy, with R2 = 90.45% and MAPE = 9.55%, outperforming the other models. These findings indicate that climatological features with lag effects can accurately estimate water-stress conditions in rainfed sugarcane when an appropriate prediction model is used. The main contribution of this study is the use of local climatological data, which is easier to obtain and collect than sophisticated satellite data, to estimate CWSI. In drought-prone areas, this strategy can help sugarcane farmers make better choices about land management and irrigation.
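The "lag features" mentioned above place past observations alongside the current one, so the regressor can see preceding conditions when predicting CWSI. A stdlib sketch of the construction; the variable name, values, and lag count are illustrative, not from the study:

```python
def add_lag_features(series, lags):
    """Build rows [x_t, x_(t-1), ..., x_(t-lags)] from a single time series.
    The first `lags` time steps are dropped because they lack full history."""
    rows = []
    for t in range(lags, len(series)):
        rows.append([series[t - l] for l in range(lags + 1)])
    return rows

# Illustrative daily rainfall values; each row pairs a day with its 2 prior days
rainfall = [0.0, 5.2, 3.1, 0.0, 12.4]
print(add_lag_features(rainfall, lags=2))
```

In the paper's setting, rows like these (built per climatological feature) become the regression inputs, and the corresponding CWSI value is the target.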
Deep Learning with Bayesian Hyperparameter Optimization for Precise Electrocardiogram Signals Delineation
Darmawahyuni, Annisa; Sari, Winda Kurnia; Afifah, Nurul; Siti Nurmaini; Jordan Marcelino; Rendy Isdwanta
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6171

Abstract

Electrocardiography (ECG) serves as an essential risk-stratification tool for deciding further treatment of cardiac abnormalities. Cardiac abnormalities are indicated by the intervals and amplitude locations in the ECG waveform, and ECG delineation plays a crucial role in identifying the critical points needed to observe them based on the waveform's characteristics and features. In this study, we propose a deep learning approach combined with Bayesian Hyperparameter Optimization (BHO) for hyperparameter tuning to delineate the ECG signal. BHO is an optimization method used to determine the optimal values of an objective function. It allows a more efficient and faster parameter search than conventional tuning methods such as grid search: it focuses on the most promising regions of the parameter space, iteratively builds a probability model of the objective function, and then uses that model to select new points to test. The hyperparameters tuned by BHO are the learning rate, batch size, number of epochs, and number of long short-term memory (LSTM) layers. The study produced 40 models, with the best model achieving 99.285% accuracy, 94.5% sensitivity, 99.6% specificity, and 94.05% precision. ECG delineation based on deep learning with BHO excels at localizing the onset, peak, and offset of ECG waveforms, and the proposed model can be applied in medical applications for ECG delineation.
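The loop the abstract describes (build a surrogate of the objective, then pick the next point the surrogate rates most promising) can be shown with a deliberately simplified toy. Everything here is an illustrative stand-in: the objective is a synthetic curve, and the "surrogate" is a crude nearest-neighbor estimate with a distance-based exploration bonus rather than the Gaussian-process or TPE model a real Bayesian optimizer would fit over learning rate, batch size, epochs, and LSTM-layer count:

```python
import random

def toy_objective(lr):
    """Stand-in for validation accuracy as a function of learning rate."""
    return 1.0 - (lr - 0.3) ** 2  # peaks at lr = 0.3

def surrogate(x, observed):
    """Crude surrogate: value of the nearest observed point, plus an
    exploration bonus that grows with distance from it."""
    nx, ny = min(observed, key=lambda p: abs(p[0] - x))
    return ny + 0.5 * abs(nx - x)

def sequential_search(n_iter, seed=0):
    """Sequential model-based search: evaluate the candidate the surrogate
    rates highest, record the result, repeat."""
    rng = random.Random(seed)
    observed = [(0.9, toy_objective(0.9))]  # one initial evaluation
    for _ in range(n_iter):
        candidates = [rng.uniform(0.0, 1.0) for _ in range(50)]
        x = max(candidates, key=lambda c: surrogate(c, observed))
        observed.append((x, toy_objective(x)))
    return max(observed, key=lambda p: p[1])

best_lr, best_score = sequential_search(20)
print(best_lr, best_score)
```

The exploration bonus keeps the search from collapsing onto the first good point, mirroring how a probabilistic surrogate trades off exploration and exploitation.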
Enhanced Heart Disease Diagnosis Using Machine Learning Algorithms: A Comparison of Feature Selection
Hirmayanti; Ema Utami
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6175

Abstract

Heart disease, or cardiovascular disease, is one of the leading causes of death in the world. Based on WHO data, 17.9 million people died from cardiovascular disease in 2019. Without early prevention, the number of victims will increase every year. With the rapid development of technology, especially in the health sector, it is hoped that medical personnel can be helped in treating patients suffering from various diseases, especially heart disease. This study therefore focuses on selecting relevant features or attributes to increase the accuracy of machine learning algorithms. The algorithms used are Random Forest and SVM. For feature selection, several techniques are used: information gain (IG), chi-square (Chi2), and correlation-based feature selection (CFS). These three techniques aim to obtain the main features and minimize irrelevant features that can slow down the learning process. Based on the results of the experiment with a 70:30 split, CFS-SVM is superior, obtaining the highest accuracy of 92.19% with nine features, while CFS-RF obtains its best value of 91.88% with eight features. Using feature selection and hyperparameter tuning, SVM improved by 10.88% and RF by 9.47%. Based on the model's performance with the selected relevant features, the proposed CFS-SVM shows good and efficient performance in diagnosing heart disease.
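Chi-square scoring, one of the selection techniques named above, tests whether a feature's distribution differs across classes; features with larger scores are kept. A stdlib sketch for the simplest case of a binary feature against a binary label (the toy data is invented):

```python
def chi_square_2x2(feature, label):
    """Chi-square statistic for a binary feature vs. a binary label.
    A larger value suggests the feature is more informative about the label."""
    # Observed counts for each (feature value, label value) cell
    obs = {(f, l): 0 for f in (0, 1) for l in (0, 1)}
    for f, l in zip(feature, label):
        obs[(f, l)] += 1
    n = len(feature)
    chi2 = 0.0
    for f in (0, 1):
        for l in (0, 1):
            row = obs[(f, 0)] + obs[(f, 1)]   # marginal for this feature value
            col = obs[(0, l)] + obs[(1, l)]   # marginal for this label value
            expected = row * col / n
            if expected:
                chi2 += (obs[(f, l)] - expected) ** 2 / expected
    return chi2

# Toy example: the feature perfectly predicts the label, so chi2 equals n
feature = [1, 1, 1, 0, 0, 0]
label   = [1, 1, 1, 0, 0, 0]
print(chi_square_2x2(feature, label))  # 6.0
```

Ranking all candidate features by such a score and keeping the top ones is the general shape of the IG/Chi2 filters compared in the paper; CFS instead scores feature subsets by class correlation and inter-feature redundancy.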
Application of Formal Concept Analysis and Clustering Algorithms to Analyze Customer Segments
Budaya, I Gede Bintang Arya; Dharmendra, I Komang; Triandini, Evi
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6184

Abstract

Business development cannot be separated from relationships with customers. Understanding customer characteristics is important both for maintaining sales and for targeting new customers with appropriate strategies. The complexity of customer data makes manual analysis of customer segments difficult, so applying machine learning to segment customers can be the solution. This research implements the K-Means and GMM algorithms for clustering based on transaction data transformed into the Recency, Frequency, and Monetary (RFM) data model, then applies Formal Concept Analysis (FCA) to analyze the customer segments after class labeling. Both the K-Means and GMM algorithms indicated that the optimal number of customer segments is four. The FCA implementation in this study further analyzes customer segment characteristics by constructing a concept lattice that categorizes segments using combinations of High and Low values across the RFM attributes relative to their median values: High Recency (HR), Low Recency (LR), High Frequency (HF), Low Frequency (LF), High Monetary (HM), and Low Monetary (LM). These characteristics can determine the customer category; for example, a customer with HM and HR can be considered a loyal customer and targeted by a specific marketing program. Overall, this study demonstrates that the RFM data model, combined with clustering algorithms and FCA, is a promising approach to understanding MSME customer segment behavior. However, special care is necessary when determining the FCA concept lattice, as it forms the foundation of the core analytical insights.
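The RFM transformation and the median-based High/Low attributes described above can be sketched as follows. The transactions, day numbering, and reference date are illustrative inventions, and this sketch covers only the labeling step, not the clustering or the concept lattice itself:

```python
from statistics import median

def rfm(transactions, today):
    """Recency (days since last purchase), Frequency, and Monetary totals per
    customer, from (customer_id, day_number, amount) tuples."""
    last, freq, money = {}, {}, {}
    for cid, day, amount in transactions:
        last[cid] = max(last.get(cid, day), day)
        freq[cid] = freq.get(cid, 0) + 1
        money[cid] = money.get(cid, 0.0) + amount
    return {cid: (today - last[cid], freq[cid], money[cid]) for cid in last}

def label_segments(table):
    """Tag each customer High/Low per RFM attribute against the median value,
    mirroring the HR/LR, HF/LF, HM/LM attributes fed into FCA."""
    meds = [median(vals) for vals in zip(*table.values())]
    names = (("HR", "LR"), ("HF", "LF"), ("HM", "LM"))
    return {
        cid: tuple(hi if v >= m else lo
                   for v, m, (hi, lo) in zip(vals, meds, names))
        for cid, vals in table.items()
    }

# Illustrative transactions: (customer, day-of-year, amount)
tx = [("a", 300, 50.0), ("a", 360, 20.0), ("b", 100, 5.0),
      ("b", 120, 5.0), ("b", 130, 5.0), ("c", 200, 80.0)]
table = rfm(tx, today=365)
print(label_segments(table))
```

These High/Low attribute sets are exactly the kind of object-attribute context from which an FCA concept lattice is then built.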
Comparative Analysis of Machine Learning Algorithms for Predicting Patient Admission in Emergency Departments Using EHR Data
Chamid, Ahmad Abdul; Nindyasari, Ratih; Ghozali, Muhammad Imam
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 2 (2025): April 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i2.6188

Abstract

Every patient rushed to the Emergency Department needs a fast decision on whether they should be admitted as an inpatient or treated as an outpatient. In practice, however, this decision must wait for a doctor's diagnosis, so when there are many patients it generally takes quite a long time. To predict patient admissions in the emergency unit, a fast and accurate machine learning model is therefore needed. This study developed machine learning and neural network models to determine patient care in Emergency Departments, using publicly available electronic health record (EHR) data comprising 3,309 records. The model development process uses machine learning methods (SVM, Decision Tree, KNN, AdaBoost, MLPClassifier) and a neural network. Each model's performance is evaluated using a confusion matrix and several metrics: accuracy, precision, recall, and F1-Score. Comparing the evaluation results, the best model was the MLPClassifier, with an accuracy of 0.736 and an F1-Score of 0.635, while the Neural Network model obtained an accuracy of 0.724 and an F1-Score of 0.640. The best models obtained in this study, the MLPClassifier and Neural Network models, were proven to outperform the other models.

Page 1 of 3 | Total Records: 25