Contact Name
Dr. Dian Palupi Rini
Contact Email
dprini@unsri.ac.id
Phone
-
Journal Mail Official
sjia@unsri.ac.id
Editorial Address
Fakultas Ilmu Komputer UNSRI
Location
Kab. Ogan Ilir,
Sumatera Selatan
INDONESIA
Sriwijaya Journal of Informatics and Applications
Published by Universitas Sriwijaya
ISSN : -     EISSN : 2807-2391     DOI : -
Core Subject :
Sriwijaya Journal of Informatics and Applications (SJIA) is a scientific periodical publishing research articles of the Informatics Department, Universitas Sriwijaya. The journal is an open-access venue for scientists and engineers in the informatics and applications area, providing online publication twice a year. SJIA offers a good opportunity for academics and industry professionals to publish high-quality, refereed papers in various areas of informatics, e.g., Machine Learning & Soft Computing, Data Mining & Big Data Analytics, Computer Vision and Pattern Recognition, Automated Reasoning, and Distributed and Security Systems.
Arjuna Subject : -
Articles 49 Documents
Reconstruction Low-Resolution Image Face Using Restricted Boltzmann Machine Julian Supardi
Sriwijaya Journal of Informatics and Applications Vol 4, No 1 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i1.72

Abstract

Low-resolution (LR) face images are one of the most challenging problems in face recognition (FR) systems: because specific facial features are difficult to find, recognition accuracy is low. To address this problem, some researchers use an image reconstruction approach to improve image resolution. In this research, we apply the restricted Boltzmann machine (RBM) to the problem. The Labeled Faces in the Wild (LFW) database is used to validate the proposed method. Experimental results show that the reconstructed images achieve a PSNR of 34.05 dB and an SSIM of 96.8%.
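The reported quality metrics are PSNR and SSIM; a minimal sketch of how such scores can be computed with scikit-image, with random arrays standing in for an LFW ground-truth image and its RBM reconstruction:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Illustrative stand-ins for a ground-truth face image and the
# RBM-reconstructed output; real inputs would come from the LFW set.
original = np.random.rand(64, 64)
reconstructed = original + 0.01 * np.random.randn(64, 64)

psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
ssim = structural_similarity(original, reconstructed, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```
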
Automatic Clustering and Fuzzy Logical Relationship to Predict the Volume of Indonesia Natural Rubber Export Widya Aprilini; Dian Palupi Rini; Hadipurnawan Satria
Sriwijaya Journal of Informatics and Applications Vol 4, No 1 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i1.51

Abstract

Natural rubber is one of the pillars of Indonesia's export commodities. However, over the last few years, the export value of natural rubber has decreased due to an oversupply of this commodity in the global market. Predicting the volume of Indonesia's natural rubber exports can help address this problem, and predicted values can also help the government compile market intelligence for natural rubber commodities periodically. In this study, export volume was predicted using Automatic Clustering as an interval maker for the Fuzzy Time Series, a combination usually called Automatic Clustering and Fuzzy Logical Relationship (ACFLR). The data consist of 51 annual observations from 1970 to 2020. The purpose of this study is to predict the volume of Indonesia's natural rubber exports and to compare the prediction results of ACFLR and Chen's Fuzzy Time Series. The results show a significant difference between the two methods: ACFLR obtained a MAPE of 0.5316%, while Chen's Fuzzy Time Series model obtained 8.009%. This shows that ACFLR performs better than the pure Fuzzy Time Series in predicting the volume of Indonesia's natural rubber exports.
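The comparison metric is MAPE; a minimal sketch of the computation with illustrative series (the study's 51 annual observations are not reproduced):

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

# Illustrative export-volume values; the paper's annual data (1970-2020)
# are not shown here.
actual     = [2500, 2610, 2580, 2700]
pred_acflr = [2498, 2612, 2579, 2702]
pred_chen  = [2300, 2800, 2400, 2900]
print(f"ACFLR MAPE: {mape(actual, pred_acflr):.4f}%")
print(f"Chen  MAPE: {mape(actual, pred_chen):.3f}%")
```
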
Risk Management Evaluation in Hospital Management Information Systems Using Framework COBIT 2019 - Case Study: Ernaldi Bahar South Sumatera Hospital Hilditia Cici Triska Amirta; Muhammad Ihsan Jambak; Pacu Putra Suarli; Yadi Utama; Ari Wedhasmara; Putri Eka Sevtiyuni
Sriwijaya Journal of Informatics and Applications Vol 4, No 1 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i1.52

Abstract

The Hospital Management Information System (SIMRS) is a system to support service delivery, reporting, and data retrieval in hospitals, and the government requires it to be implemented in all hospitals in Indonesia. SIMRS is an inseparable part of hospital services and data management, but it can also give rise to various IT risks. Therefore, good risk management is needed to minimize IT risks, whether or not they have yet occurred. The performance of IT risk management can be indicated by its capability levels. This study aims to determine the capability level and the gap value of each IT risk management process at Ernaldi Bahar Hospital. The framework used as a reference in assessing the risk management process is COBIT 2019, applied in three stages: process mapping, capability level assessment, and conclusions. The study produced the capability value of each IT risk management process, the gap values, and improvement recommendations that can be applied to SIMRS Ernaldi Bahar. The measurements show that the EDM03 and DSS03 processes are at level 3, while the APO12 and DSS05 processes are at level 1. The gap value for the EDM03 and DSS03 processes is 1 level, while the gap value for the APO12 and DSS05 processes is 3 levels. Process improvement recommendations refer to COBIT 2019 best practices.
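The gap arithmetic is straightforward; a minimal sketch assuming a target capability level of 4, which is consistent with the reported gaps though not stated explicitly in the abstract:

```python
# Gap = target capability level - measured level, per COBIT 2019 process.
# The target level of 4 is an assumption for illustration; the abstract
# reports only the measured levels and the resulting gaps.
measured = {"EDM03": 3, "DSS03": 3, "APO12": 1, "DSS05": 1}
target = 4
for process, level in measured.items():
    print(f"{process}: level {level}, gap {target - level}")
```
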
Classification of Atrial Fibrillation in ECG Signal Using Deep Learning Raihan Mufid Setiadi; Muhammad Fachrurrozi; Muhammad Naufal Rachmatullah
Sriwijaya Journal of Informatics and Applications Vol 4, No 1 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i1.53

Abstract

Atrial fibrillation is the most common type of heart rhythm disorder in the world and can cause death. Atrial fibrillation can be diagnosed by reading an electrocardiograph (ECG) recording; however, an ECG reading takes a long time and requires specialists to analyze the signal pattern. Deep learning was chosen to classify atrial fibrillation in ECG signals because it has shown around 10% higher performance than conventional machine learning methods. In this research, an application for classifying atrial fibrillation was developed using the 1-Dimensional Convolutional Neural Network (1D CNN) method. Six configurations of the 1D CNN model were developed by varying the learning rate and batch size. The best model obtained 100% accuracy, 100% precision, 100% recall, and 100% F1 score.
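A minimal Keras sketch of a 1D CNN for binary ECG classification; the layer sizes, segment length, learning rate, and batch size are illustrative assumptions, as the paper's six configurations are not detailed here:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(1000, 1)),            # one-lead ECG segment
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),   # P(atrial fibrillation)
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",
    metrics=["accuracy", keras.metrics.Precision(), keras.metrics.Recall()],
)
# model.fit(x_train, y_train, batch_size=32, epochs=20)  # data not shown
```

The paper's experiment amounts to sweeping learning_rate and batch_size over six such configurations and comparing the resulting metrics.
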
Identification Types Of Student Learning Modalities In Physics Subjects With Expert Systems Using Bayes Theorem Method Muhammad Ukkasyah; Yunita Yunita; Kanda Januar Miraswan
Sriwijaya Journal of Informatics and Applications Vol 4, No 1 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i1.54

Abstract

Learning modality is a person's way of absorbing and processing information effectively and efficiently. This study aims to identify the types of student learning modalities in physics subjects with an expert system using the Bayes theorem method, and to measure the method's accuracy in doing so. The Bayes theorem method is used because it can produce a parameter estimate by combining information from the sample with previously available information to determine the learning modality. This study uses 21 characteristics of learning modalities, 3 types of learning modalities, and 30 test cases obtained from an expert physics teacher at SMA Sumsel Jaya Palembang. The tests show that the system identifies types of student learning modalities in physics subjects with 90% accuracy. It can be concluded that the Bayes theorem method can be used to identify types of student learning modalities in physics subjects.
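A minimal sketch of the Bayes-theorem scoring such an expert system performs; the modality names, priors, and conditional probabilities below are illustrative, not the expert's actual values for the 21 characteristics:

```python
# Bayes theorem: P(H|E) = P(E|H) * P(H) / sum_k P(E|H_k) * P(H_k)
priors = {"visual": 1/3, "auditory": 1/3, "kinesthetic": 1/3}
# P(characteristic | modality), elicited from the expert in the study;
# these numbers are placeholders.
likelihood = {
    "prefers_diagrams": {"visual": 0.8, "auditory": 0.1, "kinesthetic": 0.2},
    "learns_by_doing":  {"visual": 0.2, "auditory": 0.1, "kinesthetic": 0.9},
}
observed = ["prefers_diagrams"]          # characteristics the student reports

scores = dict(priors)
for evidence in observed:
    for modality in scores:
        scores[modality] *= likelihood[evidence][modality]
total = sum(scores.values())
posterior = {m: s / total for m, s in scores.items()}
print(posterior)   # the modality with the highest posterior is identified
```
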
Application of Elimination Et Choix Transduisant La Realita (ELECTRE) in Hotel Selection in Palembang City Fadhlan Jiwa Hanuraga; Yunita Yunita; Nabila Rizky Oktadini
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.48

Abstract

This study develops software for hotel selection using ELimination Et Choix Traduisant la REalité (ELECTRE). In ELECTRE, multi-criteria decision-making is based on the concept of outranking, using pairwise comparisons of alternatives on each criterion. The calculation proceeds by normalizing the decision matrix, weighting the normalized matrix, determining the concordance and discordance sets, calculating the concordance and discordance matrices, determining the dominant concordance and discordance matrices, determining the aggregate dominance matrix, and eliminating the less favorable alternatives. After the ELECTRE calculation, the system was tested using the Technology Acceptance Model (TAM). The TAM results were 87.06% for perceived usefulness and 85.33% for perceived ease of use.
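The steps listed above map almost one-to-one onto matrix operations; a minimal NumPy sketch with illustrative hotel scores and weights (not the paper's data), using the common mean-value rule for the dominance thresholds:

```python
import numpy as np

X = np.array([[7.0, 8.0, 5.0],    # rows: hotels, cols: criteria
              [6.0, 7.0, 9.0],
              [8.0, 6.0, 6.0]])
w = np.array([0.5, 0.3, 0.2])     # criterion weights (sum to 1)

R = X / np.sqrt((X ** 2).sum(axis=0))   # 1. normalize decision matrix
V = R * w                               # 2. weighted normalized matrix

n = len(V)
C = np.zeros((n, n))                    # concordance matrix
D = np.zeros((n, n))                    # discordance matrix
for k in range(n):
    for l in range(n):
        if k == l:
            continue
        conc = V[k] >= V[l]             # 3. concordance/discordance sets
        C[k, l] = w[conc].sum()         # 4. concordance index
        diff = np.abs(V[k] - V[l])
        D[k, l] = diff[~conc].max() / diff.max() if (~conc).any() else 0.0

c_bar = C.sum() / (n * (n - 1))         # mean-value thresholds
d_bar = D.sum() / (n * (n - 1))
F = (C >= c_bar).astype(int)            # 5. dominant concordance matrix
G = (D <= d_bar).astype(int)            # 6. dominant discordance matrix
E = F * G                               # 7. aggregate dominance matrix
np.fill_diagonal(E, 0)
print(E)  # 8. alternatives dominated everywhere can be eliminated
```
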
Text Generation using Long Short Term Memory to Generate a LinkedIn Post Muhammad Rizqi Assabil; Novi Yusliani; Annisa Darmawahyuni
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.64

Abstract

LinkedIn is one of the most popular sites for advertising oneself to potential employers. This study aims to create a text generation model good enough that its output reads as if it were written by someone posting on LinkedIn. The study uses a neural network layer called Long Short-Term Memory (LSTM) as the main algorithm, with training data consisting of actual posts made by LinkedIn users. LSTM is an architecture created to reduce the vanishing and exploding gradient problems in neural networks. The final accuracy and loss vary across runs. Increasing the learning rate from its default value of 0.001 to 0.01, or even 0.1, produces worse models. Meanwhile, increasing the LSTM dimension sometimes increases and sometimes decreases training time without meaningfully improving model performance. The models ultimately chosen reach around 97% accuracy. From this study, it can be concluded that LSTM can be used to create a text generation model, although the results may not be fully satisfying. For future work, it is advised to use a newer architecture instead, such as the Transformer model.
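A minimal Keras sketch of the kind of word-level LSTM generator the abstract describes; the vocabulary size, sequence length, and LSTM dimension are illustrative assumptions, and the LinkedIn corpus is not reproduced:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len, lstm_dim = 5000, 20, 128
model = keras.Sequential([
    keras.Input(shape=(seq_len,)),            # window of preceding tokens
    layers.Embedding(vocab_size, 64),
    layers.LSTM(lstm_dim),                    # gating mitigates vanishing/
    layers.Dense(vocab_size,                  # exploding gradients
                 activation="softmax"),       # distribution over next token
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),  # the default rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# Training pairs: a sliding window of seq_len tokens predicts the next token.
# model.fit(x_windows, next_tokens, epochs=50)   # corpus not shown
```

The experiments in the abstract correspond to raising learning_rate to 0.01 or 0.1 and varying lstm_dim, then comparing accuracy, loss, and training time.
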
Sentiment Analysis Using PSEUDO Nearest Neighbor and TF-IDF TEXT Vectorizer Yogi Pratama; Abdiansyah Abdiansyah; Kanda Januar Miraswan
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.68

Abstract

Twitter is one of the social media platforms most often used by researchers as an object of sentiment analysis research. A recurring problem in this field is that many factors, such as the use of colloquial or informal language, can affect sentiment results. To improve sentiment classification, a good information extraction process is necessary, and one word-weighting method produced by such a process is the TF-IDF Vectorizer. This study examines the effect of TF-IDF Vectorizer weighting on sentiment analysis using the Pseudo Nearest Neighbor method. The f-measure of sentiment classification with the TF-IDF Vectorizer was 89% at k=2, 89% at k=3, 71% at k=4, and 75% at k=5, while without the TF-IDF Vectorizer it was 90% at k=2, 92% at k=3, 84% at k=4, and 89% at k=5. The classification without the TF-IDF Vectorizer thus achieved slightly better f-measure values.
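A sketch of TF-IDF weighting feeding a pseudo nearest neighbor classifier; the example tweets and labels are placeholders, and the 1/i weighting of the i-th nearest same-class neighbor follows one common formulation of the pseudo nearest neighbor rule, which may differ in detail from the paper's:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import euclidean_distances

train_texts = ["pelayanan sangat bagus", "produk buruk sekali",
               "sangat puas dengan layanan", "kecewa produk jelek"]
train_labels = np.array([1, 0, 1, 0])        # 1 = positive, 0 = negative
test_texts = ["layanan bagus dan memuaskan"]

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts).toarray()
X_test = vec.transform(test_texts).toarray()

def pnn_predict(x, X, y, k=2):
    """Assign the class whose k nearest members give the smallest
    1/i-weighted distance sum (pseudo nearest neighbor rule)."""
    dists = euclidean_distances([x], X)[0]
    scores = {}
    for c in np.unique(y):
        d_c = np.sort(dists[y == c])[:k]     # k nearest samples of class c
        scores[c] = sum(d / (i + 1) for i, d in enumerate(d_c))
    return min(scores, key=scores.get)

print(pnn_predict(X_test[0], X_train, train_labels, k=2))
```
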
Sign Language A-Z Alphabet Introduction American Sign Language using Support Vector Machine Muhammad Rasuandi; Muhammad Fachrurrozi; Anggina Primanita
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.74

Abstract

Deafness is a condition in which a person's hearing cannot function normally. This condition affects day-to-day interactions, making it difficult to understand and convey information. Communication problems for the deaf are handled through various forms of sign language, one of which is American Sign Language. Computer-vision-based sign language recognition often takes a long time to develop, is less accurate, and cannot be done in real time, so a solution is needed to overcome these problems. The system is trained with the Support Vector Machine method to classify data, and testing is carried out using the RBF kernel function with C parameters of 10, 50, and 100. The results show that the Support Vector Machine with a C parameter of 100 performs best, as evidenced by the accuracy of the RBF C=100 kernel, which is 99%.
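A minimal scikit-learn sketch of the reported comparison of the RBF kernel over C in {10, 50, 100}; the synthetic features stand in for the sign-language features used in the study, which are not reproduced:

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for hand-sign feature vectors (4 classes instead of
# the study's 26 letters, purely for illustration).
X, y = make_classification(n_samples=600, n_features=42, n_classes=4,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

for C in (10, 50, 100):
    clf = SVC(kernel="rbf", C=C).fit(X_tr, y_tr)   # larger C = harder margin
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"C={C}: accuracy={acc:.3f}")
```
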
Automatic Data Extraction Utilizing Structural Similarity From A Set of Portable Document Format (PDF) Files Hadipurnawan Satria; Anggina Primanita
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.89

Abstract

Instead of storing data in databases, office workers often keep data related to their work in document or report files that they can conveniently access with popular off-the-shelf software, such as Portable Document Format (PDF) files. Their workplaces may use databases, but workers usually possess neither the privilege nor the proficiency to fully utilize them. Such workplaces likely have front-end systems, such as a Management Information System (MIS), from which workers obtain data-bearing reports or documents. These documents are meant for immediate or presentational use, but workers often keep the files because the data inside may prove useful later. They can then manipulate and combine data from one or more report files to suit their work needs on the occasions when the MIS cannot fulfill those needs.

To do this, workers need to extract data from the report files. However, the files also contain formatting and other content such as organization banners and signature placeholders. Extracting data from these files is not easy, and workers are often forced to use repeated copy-and-paste actions to get the data they want, which is tedious, time-consuming, and prone to errors. Automatic data extraction is not new, and many existing solutions are available, but they typically require human guidance before the extraction can become truly automatic, and they may require expertise that makes workers hesitant to use them in the first place.

A particular function of an MIS can produce many report files, each containing distinct data but all structurally similar. In this paper we demonstrate that, by targeting all PDF files that come from the same source and exploiting this similarity, it is possible to create a fully automatic data extraction system that requires no human guidance. First, a model is generated by analyzing a small sample of PDFs; the model is then used to extract data from all PDF files in the set. Our experiments show that the system can quickly achieve a 100% accuracy rate with very few sample files. In occasional cases where the data inside the PDFs are not sufficiently distinct from each other, accuracy falls below 100%, but this can easily be detected and fixed with slight human intervention; in these cases human intervention cannot be eliminated entirely, but the amount needed is significantly reduced.
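A simplified sketch of the structural-similarity idea using pdfminer.six: text boxes whose content is identical at the same position across sample PDFs are treated as template (banners, labels), and the remaining boxes as data fields. The file names and coordinate rounding are illustrative assumptions; the paper's model-building procedure is more involved:

```python
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer

def text_boxes(path):
    """Map each text box's rounded position to its text content."""
    boxes = {}
    for page in extract_pages(path):
        for el in page:
            if isinstance(el, LTTextContainer):
                key = (round(el.x0), round(el.y1))   # position as field id
                boxes[key] = el.get_text().strip()
    return boxes

# Build the "model" from a small sample of structurally similar reports:
# positions whose text never varies are template, everything else is data.
samples = [text_boxes(p) for p in ("report1.pdf", "report2.pdf")]
static = {k for k in samples[0]
          if all(s.get(k) == samples[0][k] for s in samples[1:])}

def extract_data(path):
    """Return only the varying (data) fields of a similar PDF."""
    return {k: v for k, v in text_boxes(path).items() if k not in static}

print(extract_data("report3.pdf"))
```
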