Contact Name
Dr. Dian Palupi Rini
Contact Email
dprini@unsri.ac.id
Phone
-
Journal Mail Official
sjia@unsri.ac.id
Editorial Address
Fakultas Ilmu Komputer UNSRI
Location
Kab. Ogan Ilir,
Sumatera Selatan
INDONESIA
Sriwijaya Journal of Informatics and Applications
Published by Universitas Sriwijaya
ISSN : -     EISSN : 2807-2391     DOI : -
Core Subject :
Sriwijaya Journal of Informatics and Applications (SJIA) is a scientific periodical that publishes research articles of the Informatics Department, Universitas Sriwijaya. The journal is an open-access venue for scientists and engineers in informatics and its applications, providing online publication twice a year. SJIA offers a good opportunity for academics and industry professionals to publish high-quality, refereed papers in various areas of informatics, e.g., Machine Learning & Soft Computing, Data Mining & Big Data Analytics, Computer Vision and Pattern Recognition, Automated Reasoning, and Distributed and Security Systems.
Arjuna Subject : -
Articles: 5 Documents
Search results for issue "Vol 4, No 2 (2023)": 5 Documents
Application of Elimination Et Choix Transduisant La Realita (ELECTRE) in Hotel Selection in Palembang City Fadhlan Jiwa Hanuraga; Yunita Yunita; Nabila Rizky Oktadini
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.48

Abstract

This study develops hotel-selection software using Elimination Et Choix Transduisant La Realita (ELECTRE), a multi-criteria decision-making method based on the concept of outranking through pairwise comparison of alternatives against the criteria. The method proceeds by normalizing the decision matrix, weighting the normalized matrix, determining the concordance and discordance sets, calculating the concordance and discordance matrices, determining the dominant concordance and discordance matrices, determining the aggregate dominance matrix, and eliminating the less favorable alternatives. After the ELECTRE calculation, the system was evaluated with the Technology Acceptance Model (TAM), yielding 87.06% for perceived usefulness and 85.33% for perceived ease of use.
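The ELECTRE steps listed in the abstract can be sketched as follows. This is a minimal illustration with hypothetical hotel scores and criterion weights, not the study's actual data; threshold choices (averages of the concordance/discordance matrices) are one common convention, which the paper may implement differently.

```python
import numpy as np

def electre(X, w):
    """Run the ELECTRE outranking steps on decision matrix X (m alternatives
    x n benefit criteria) with criterion weights w. Returns the aggregate
    dominance matrix E, where E[i, j] = 1 means alternative i outranks j."""
    m, n = X.shape
    # 1. Normalize the decision matrix column-wise (vector normalization).
    R = X / np.sqrt((X ** 2).sum(axis=0))
    # 2. Weight the normalized matrix.
    V = R * w
    # 3-5. Concordance and discordance matrices from pairwise comparisons.
    C = np.zeros((m, m))
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            conc = V[i] >= V[j]                 # concordance set: criteria where i >= j
            C[i, j] = w[conc].sum()             # concordance index = sum of those weights
            diff = np.abs(V[i] - V[j])
            denom = diff.max()
            D[i, j] = (diff[~conc].max() / denom
                       if denom > 0 and (~conc).any() else 0.0)
    # 6. Dominant concordance/discordance matrices via average thresholds.
    c_bar = C.sum() / (m * (m - 1))
    d_bar = D.sum() / (m * (m - 1))
    F = (C >= c_bar).astype(int)
    G = (D <= d_bar).astype(int)
    # 7. Aggregate dominance matrix; alternatives dominated everywhere
    # can then be eliminated as less favorable.
    E = F * G
    np.fill_diagonal(E, 0)
    return E

# Hypothetical example: 3 hotels scored on price, location, facilities.
X = np.array([[70.0, 80.0, 60.0],
              [65.0, 75.0, 85.0],
              [90.0, 60.0, 55.0]])
w = np.array([0.4, 0.35, 0.25])
E = electre(X, w)
```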
Text Generation using Long Short Term Memory to Generate a LinkedIn Post Muhammad Rizqi Assabil; Novi Yusliani; Annisa Darmawahyuni
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.64

Abstract

LinkedIn is one of the most popular sites for advertising oneself to potential employers. This study aims to build a text generation model good enough to produce text as if it were written by someone posting on LinkedIn. The study uses a neural network layer called Long Short-Term Memory (LSTM) as the main algorithm, with training data consisting of actual posts made by LinkedIn users. LSTM is an architecture designed to reduce the vanishing and exploding gradient problems in neural networks. The final accuracy and loss vary across runs: increasing the learning rate from its default of 0.001 to 0.01 or 0.1 produces worse models, while increasing the LSTM dimensions sometimes lengthens or shortens training time without meaningfully improving performance. The models ultimately chosen reach around 97% accuracy. The study concludes that LSTM can be used to build a text generation model, although the results may not be fully satisfying; for future work, a newer architecture such as the Transformer is advised.
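The gating mechanism that lets LSTM mitigate vanishing and exploding gradients can be sketched as a single forward step. This is an illustrative NumPy implementation with made-up dimensions and random weights, not the study's Keras/TensorFlow model or its trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: the forget/input gates make the cell-state update
    additive (c = f*c_prev + i*g), the property that eases vanishing and
    exploding gradients compared with a plain recurrent layer."""
    H = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b   # all four gate pre-activations
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:])        # candidate cell update
    c = f * c_prev + i * g        # additive cell-state "gradient highway"
    h = o * np.tanh(c)            # hidden state exposed to the next layer
    return h, c

# Hypothetical sizes: 8-dim input (e.g. a character embedding), 16 hidden units.
rng = np.random.default_rng(0)
X_DIM, H_DIM = 8, 16
W = rng.normal(scale=0.1, size=(4 * H_DIM, X_DIM + H_DIM))
b = np.zeros(4 * H_DIM)
h, c = np.zeros(H_DIM), np.zeros(H_DIM)
for _ in range(5):                # unroll over a short input sequence
    h, c = lstm_step(rng.normal(size=X_DIM), h, c, W, b)
```

In a generation loop, the hidden state h would feed a softmax over the vocabulary to sample the next token.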
Sentiment Analysis Using PSEUDO Nearest Neighbor and TF-IDF TEXT Vectorizer Yogi Pratama; Abdiansyah Abdiansyah; Kanda Januar Miraswan
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.68

Abstract

Twitter is one of the social media platforms most often used by researchers as an object of sentiment analysis. A recurring problem in sentiment analysis research is that many factors, such as the use of colloquial or informal language, can affect sentiment results. To improve classification, a good information extraction process is needed; one word-weighting method resulting from such a process is the TF-IDF Vectorizer. This study examines the effect of TF-IDF Vectorizer weighting on sentiment analysis using the Pseudo Nearest Neighbor method. The f-measure with the TF-IDF Vectorizer was 89% at k=2, 89% at k=3, 71% at k=4, and 75% at k=5, while without it the scores were 90%, 92%, 84%, and 89% respectively. Thus, classification without the TF-IDF Vectorizer yielded slightly better f-measure values than with it.
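The pipeline described above can be sketched as follows: TF-IDF vectorization followed by a Pseudo Nearest Neighbor classifier, which scores each class by a weighted sum of the k smallest distances within that class and picks the class with the smallest pseudo distance. The tiny "tweets" below are invented for illustration; the weighting scheme (1, 1/2, ..., 1/k) is one common PNN variant and may differ from the paper's exact formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def pseudo_nn(X_train, y_train, x, k=3):
    """Pseudo Nearest Neighbor: for each class, take the k nearest training
    vectors and combine their distances with weights 1, 1/2, ..., 1/k;
    predict the class whose pseudo distance is smallest."""
    best_cls, best_dist = None, np.inf
    for cls in np.unique(y_train):
        d = np.linalg.norm(X_train[y_train == cls] - x, axis=1)
        d = np.sort(d)[:k]
        weights = 1.0 / np.arange(1, len(d) + 1)
        pseudo = (weights * d).sum()
        if pseudo < best_dist:
            best_cls, best_dist = cls, pseudo
    return best_cls

# Hypothetical toy tweets, labeled 1 = positive, 0 = negative.
docs = ["great service love it", "awesome fast love", "bad slow terrible",
        "terrible service hate it", "love this great app", "hate the slow app"]
labels = np.array([1, 1, 0, 0, 1, 0])

vec = TfidfVectorizer()                      # TF-IDF word weighting
X = vec.fit_transform(docs).toarray()
q = vec.transform(["love the great service"]).toarray()[0]
pred = pseudo_nn(X, labels, q, k=2)
```

Dropping the vectorizer, as the study's comparison does, would mean feeding raw term counts (or another representation) into the same classifier.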
Sign Language A-Z Alphabet Introduction American Sign Language using Support Vector Machine Muhammad Rasuandi; Muhammad Fachrurrozi; Anggina Primanita
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.74

Abstract

Deafness is a condition in which a person's hearing cannot function normally. This condition affects everyday interaction, making it difficult to understand and convey information. Communication problems for the deaf are addressed through the introduction of various forms of sign language, one of which is American Sign Language. Computer-vision-based sign language recognition often takes a long time to develop, is less accurate, and cannot be run directly in real time, so a solution is needed. In the training process, the Support Vector Machine method is used to classify the data, and testing is carried out with the RBF kernel function at C parameter values of 10, 50, and 100. The results show that the Support Vector Machine with C = 100 performs best, as evidenced by the accuracy of the RBF C=100 kernel reaching 99%.
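The experimental setup can be sketched with scikit-learn's SVC: an RBF-kernel SVM compared at the study's three C values. The digits dataset below is only a stand-in, since the paper's ASL hand-image features are not available here; accuracies on this substitute data will not match the paper's 99%.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data: scikit-learn's digits dataset in place of the study's
# ASL hand-gesture features (hypothetical substitute, for illustration only).
X, y = datasets.load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Compare RBF-kernel SVMs at the C values used in the study: 10, 50, 100.
# Larger C penalizes misclassified training points more, giving a tighter fit.
scores = {}
for C in (10, 50, 100):
    clf = SVC(kernel="rbf", C=C).fit(X_tr, y_tr)
    scores[C] = clf.score(X_te, y_te)
```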
Automatic Data Extraction Utilizing Structural Similarity From A Set of Portable Document Format (PDF) Files Hadipurnawan Satria; Anggina Primanita
Sriwijaya Journal of Informatics and Applications Vol 4, No 2 (2023)
Publisher : Fakultas Ilmu Komputer Universitas Sriwijaya

DOI: 10.36706/sjia.v4i2.89

Abstract

Instead of storing data in databases, computer-aided office workers often keep work-related data in document or report files that they can conveniently access with popular off-the-shelf software, such as Portable Document Format (PDF) files. Their workplaces may use databases, but the workers usually possess neither the privileges nor the proficiency to fully utilize them. Such workplaces likely have front-end systems, such as a Management Information System (MIS), from which workers obtain reports or documents containing their data. These documents are meant for immediate or presentational use, but workers often keep the files because the data inside may become useful later. They can then manipulate and combine data from one or more report files to suit their work needs on occasions when the MIS cannot fulfill those needs. To do this, workers need to extract data from the report files. However, the files also contain formatting and other content, such as organization banners and signature placeholders, so extracting the data is not easy, and workers are often forced to rely on repeated copy-and-paste, which is tedious, time-consuming, and error-prone. Automatic data extraction is not new; many solutions exist, but they typically require human guidance before the extraction can become truly automatic, and they may demand expertise that makes workers hesitant to adopt them. A particular MIS function can produce many report files, each containing distinct data but structurally similar. In this paper we demonstrate that, by targeting all PDF files from the same source and exploiting this similarity, it is possible to create a fully automatic data extraction system that requires no human guidance.
First, a model is generated by analyzing a small sample of PDFs; the model is then used to extract data from all PDF files in the set. Our experiments show that the system can quickly achieve a 100% accuracy rate with very few sample files. In occasional cases where the data inside the PDFs are not sufficiently distinct from each other, accuracy falls below 100%, but this can be easily detected and fixed with slight human intervention; eliminating human intervention entirely may not be possible in these cases, but the amount needed is significantly reduced.
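The sample-then-extract idea can be sketched in miniature. This toy works on plain text lines rather than real PDF content streams (which the paper processes), and the "report files" are invented: positions whose text varies across a small sample of structurally similar documents are treated as data fields, everything constant is treated as boilerplate.

```python
def build_model(samples):
    """Learn which line positions hold data: across structurally similar
    sample documents, positions whose text differs are data fields, while
    positions whose text is identical are template boilerplate."""
    return [i for i, lines in enumerate(zip(*samples)) if len(set(lines)) > 1]

def extract(model, doc):
    """Apply the learned model: keep only the data-field positions."""
    return [doc[i] for i in model]

# Hypothetical report files produced by the same MIS function:
# identical template (banner, signature line), distinct data.
docs = [
    ["ACME Corp Monthly Report", "Name: Alice", "Total: 120", "Signature: ____"],
    ["ACME Corp Monthly Report", "Name: Bob",   "Total: 85",  "Signature: ____"],
    ["ACME Corp Monthly Report", "Name: Carol", "Total: 240", "Signature: ____"],
]
model = build_model(docs[:2])                 # a small sample suffices here
records = [extract(model, d) for d in docs]   # fully automatic extraction
```

When two sample documents happen to agree on a genuine data field (the "not sufficiently distinct" case in the abstract), that field would be misclassified as boilerplate, which is exactly the failure mode the authors report detecting and fixing with slight intervention.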
