Contact Name
Yuhefizar
Contact Email
jurnal.resti@gmail.com
Phone
+628126777956
Journal Mail Official
ephi.lintau@gmail.com
Editorial Address
Politeknik Negeri Padang, Kampus Limau Manis, Padang, Indonesia.
Location
INDONESIA
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
ISSN: 2580-0760 | EISSN: 2580-0760 | DOI: https://doi.org/10.29207/resti.v2i3.606
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) is intended as a medium for scientific studies of research results, ideas, and critical-analytical studies in the fields of Systems Engineering, Informatics/Information Technology, Informatics Management, and Information Systems. It is part of the effort to disseminate knowledge from research and thought for the benefit of the wider community, and to serve as an academic reference in the field of information technology. Jurnal RESTI accepts scientific articles within the following research scope: Software Engineering; Hardware Engineering; Information Security; Systems Engineering; Expert Systems; Decision Support Systems; Data Mining; Artificial Intelligence Systems; Computer Networks; Computer Engineering; Image Processing; Genetic Algorithms; Information Systems; Business Intelligence and Knowledge Management; Database Systems; Big Data; Internet of Things; Enterprise Computing; Machine Learning; and other relevant topics.
Articles: 29 Documents | Issue: Vol 4 No 5 (2020): Oktober 2020
Rekayasa Ulang Sistem Informasi Beasiswa IKAPCR Apriantoni Apriantoni; Indah Lestari; Dadang Syarif Sihabudin Sahid
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (691.657 KB) | DOI: 10.29207/resti.v4i5.1889

Abstract

The Politeknik Caltex Riau Alumni Association (IKAPCR) operates a scholarship information system that includes a Decision Support System (DSS) and scholarship financial management. The system requires reengineering of several features to meet the needs of users at the management level. Beta testing showed that the system runs normally according to user requirements, is informative and innovative, and accelerates the scholarship selection process. Its weaknesses lie in data integration with the academic system of Politeknik Caltex Riau (PCR), regular donation-reminder services via e-mail and SMS, management of semester-payment data for scholarship recipients, and additional graphical information for the analysis process. The system therefore needs reengineering to improve the efficiency of each process. In WebQual testing with 117 students and 33 PCR alumni, student respondents scored the system at 79.7%, indicating agreement that web quality is good, while alumni respondents scored it at 80.2%, indicating strong agreement.
Implementasi Deteksi Rumor pada Twitter Menggunakan Metode Klasifikasi SVM Annisa Rahmaniar Dwi Pratiwi; Erwin Budi Setiawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (339.961 KB) | DOI: 10.29207/resti.v4i5.2031

Abstract

Twitter is a popular social networking site, first launched in 2006, that allows users to spread information in real time. However, that information is not always factual and is sometimes deliberately used to spread rumors that frighten the public, so detection efforts are needed to overcome and prevent their spread. Much research has addressed rumor detection, but it is largely limited to English and Chinese. In this study, the authors built a system to detect Indonesian-language rumors based on SVM classification with feature selection using TF-IDF weighting. Data were collected from November 2019 to February 2020 by crawling on keywords, followed by a manual labeling process. The data cover government-related and trending topics, with 47,449 records and feature combinations based on users and tweets. The research stages comprise collecting data from Twitter, preprocessing (case folding, URL removal, normalization, stopword removal, and stemming), feature selection, N-gram modeling, classification, and evaluation with a confusion matrix. The system performed best in the scenario using 10% testing data and unigram features, with a highest accuracy of 78.71%. The Twitter features that most affected rumor detection were the number of followings, likes, and mentions.
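The TF-IDF weighting used for feature selection above can be sketched in plain Python. This is a minimal illustration on a toy corpus, not the authors' implementation; the exact TF and IDF variants they used are not stated, so the raw-frequency/log-IDF form below is an assumption:

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n_docs = len(corpus)
    # Document frequency: in how many documents each term appears
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy tokenized tweets; "tweet" appears in 2 of 3 documents, so its IDF is log(3/2)
docs = [["rumor", "tweet", "viral"],
        ["tweet", "fact"],
        ["rumor", "fact", "check"]]
w = tf_idf(docs)
```

Terms that occur in every document receive weight zero, which is why rare, topic-specific words dominate the resulting feature vectors.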
Implementasi Deteksi Rumor di Twitter Menggunakan Algoritma J48 Yoan Maria Vianny; Erwin Budi Setiawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (331.92 KB) | DOI: 10.29207/resti.v4i5.2059

Abstract

The existence of rumors on Twitter has caused considerable unrest among Indonesians, and information of unverified validity confuses users. In this study, an Indonesian rumor detection system is built using the J48 algorithm combined with the Term Frequency-Inverse Document Frequency (TF-IDF) weighting method. The dataset contains 47,449 manually labeled tweets. This study introduces new features, namely the number of emoticons in the display name, the number of digits in the display name, and the number of digits in the username; these three features maximize the information available about the information source. The highest accuracy obtained is 75.76%, using 90% training data and 1,000 TF-IDF features over 1-gram to 3-gram combinations.
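The 1-gram to 3-gram combinations mentioned above can be illustrated with a short sketch. This is an assumption of how n-grams are typically extracted from a tokenized tweet, not the paper's code:

```python
def ngrams(tokens, n_max=3):
    """Return all n-grams of the token list for n = 1..n_max."""
    grams = []
    for n in range(1, n_max + 1):
        grams.extend(
            " ".join(tokens[i:i + n])
            for i in range(len(tokens) - n + 1)
        )
    return grams

feats = ngrams(["berita", "ini", "hoax"])
# → ['berita', 'ini', 'hoax', 'berita ini', 'ini hoax', 'berita ini hoax']
```

Each resulting n-gram would then be a candidate TF-IDF feature, from which the 1,000 highest-weighted ones are kept.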
Deteksi Emosi Wicara pada Media On-Demand menggunakan SVM dan LSTM Ainurrochman; Derry Pramono Adi; Agustinus Bimo Gumelar
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (620.798 KB) | DOI: 10.29207/resti.v4i5.2073

Abstract

To date, many speech datasets with emotion classes exist, but they rely on impromptu or scripted actors, with native speakers given a stimulus for each emotional expression. Because natural conversation from secretly recorded daily communication raises ethical issues, using voice data sampled from movies and podcasts is the most appropriate way to obtain the best insights from speech. Professional actors are trained to induce emotions closest to natural ones through the Stanislavski acting method. A speech dataset that meets this qualification is Human voice Natural Language from On-demand media (HENLO). HENLO contains per-emotion audio clips of films and podcasts originating from on-demand media, a motion-video entertainment platform whose content can be played and downloaded at any time. In this paper, we describe the use of sound clips from HENLO for learning with a Support Vector Machine (SVM) and Long Short-Term Memory (LSTM). We found that the best strategy is to train the LSTM first and then feed the model to the SVM, with an 80:20 data split. Over five training phases, the final accuracy increased by more than 17% compared with the first, indicating that the two methods complement each other and both are important for improving classification accuracy.
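The 80:20 data split strategy mentioned above can be sketched as a generic shuffled split. This is assumed for illustration; the actual HENLO partitioning procedure may differ:

```python
import random

def train_test_split(samples, train_ratio=0.8, seed=42):
    """Shuffle and split a list of samples into train/test partitions."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical clip names: 80 for LSTM training, 20 held out for evaluation
clips = [f"clip_{i}.wav" for i in range(100)]
train, test = train_test_split(clips)
```

Fixing the seed keeps the split reproducible across the five training phases the abstract describes.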
Perbandingan CART dan Random Forest untuk Deteksi Kanker berbasis Klasifikasi Data Microarray Riska Chairunisa; Adiwijaya; Widi Astuti
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (434.683 KB) | DOI: 10.29207/resti.v4i5.2083

Abstract

Cancer is one of the deadliest diseases in the world, with a mortality rate of 57.3% in Asia in 2018; early diagnosis is therefore needed to avoid an increase in cancer mortality. As machine learning develops, cancer gene data from microarrays can be processed for early detection of cancer outbreaks. However, microarray data have so many attributes that dimensionality reduction is necessary. To overcome this, this study used Discrete Wavelet Transform (DWT) dimensionality reduction with Classification and Regression Tree (CART) and Random Forest (RF) as classification methods, in order to determine which classifier performs best when combined with DWT. Five microarray datasets were used, namely Colon Tumor, Breast Cancer, Lung Cancer, Prostate Tumor, and Ovarian Cancer from the Kent Ridge Biomedical Dataset. The best accuracies obtained were 76.92% for breast cancer with CART-DWT, 90.1% for colon tumors with RF-DWT, 100% for lung cancer with RF-DWT, 95.49% for prostate tumors with RF-DWT, and 100% for ovarian cancer with RF-DWT. From these results it can be concluded that RF-DWT outperforms CART-DWT.
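One level of the Haar DWT, the simplest member of the DWT family, halves the feature dimension when only the approximation (low-pass) coefficients are kept. The sketch below illustrates that idea; it is an assumption, since the paper does not state which wavelet or decomposition level was used:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: return (approximation, detail) coefficients."""
    if len(signal) % 2:
        signal = signal + [0.0]    # zero-pad odd-length feature vectors
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

# A toy 8-value "gene expression" vector reduced to 4 approximation coefficients
features = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
approx, _ = haar_dwt(features)
```

Applying the transform repeatedly to the approximation coefficients would shrink a microarray's thousands of attributes to a manageable feature vector for CART or RF.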
Analisis Perbandingan Tools Forensic pada Aplikasi Twitter Menggunakan Metode Digital Forensics Research Workshop Ikhsan Zuhriyanto; Anton Yudhana; Imam Riadi
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (1126.66 KB) | DOI: 10.29207/resti.v4i5.2152

Abstract

Crime is currently increasing, including crime committed through social media, and no crime fails to leave digital evidence. Twitter is a social media application widely used by its users, and crimes such as fraud, insults, and hate speech have lately made heavy use of social media applications, especially Twitter. This research was conducted to find forensic evidence in the Twitter social media application accessed from a smartphone, using the Digital Forensics Research Workshop (DFRWS) method. Its digital forensic stages (identification, preservation, collection, examination, analysis, and presentation) were applied to find digital evidence of crime using the MOBILedit Forensic Express and Belkasoft Evidence Center software. Digital evidence on the smartphone was sought using case scenarios and 16 predefined variables, yielding evidence in the form of smartphone specifications, Twitter accounts, application versions, and conversations in the form of messages and statuses. The results indicate that MOBILedit Forensic Express performs better, with an accuracy rate of 85.75%, versus 43.75% for Belkasoft Evidence Center.
Komparatif Analisis Keamanan Aplikasi Instant Messaging Berbasis Web Imam Riadi; Rusydi Umar; Muhammad Abdul Aziz
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (530.857 KB) | DOI: 10.29207/resti.v4i5.2213

Abstract

The vulnerability of web-based instant messaging applications has become a main concern for users, in line with the increasing number of cybercrimes on social media. This research compares the vulnerability values of the web-based WhatsApp, Telegram, and Skype applications using the Association of Chief Police Officers (ACPO) method. Digital artifacts in the form of text messages, picture messages, video messages, telephone numbers, and user IDs were acquired in this research process using the FTK Imager and OSForensic tools. The results show that the web-based Skype application has a vulnerability value of 92%, while web-based WhatsApp and Telegram each have the same vulnerability value of 67%, based on all digital artifacts successfully acquired.
Investigasi Bukti Digital Optical Drive Menggunakan Metode National Institute of Standard and Technology (NIST) Imam Riadi; Abdul Fadlil; Muhammad Immawan Aulia
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (628.662 KB) | DOI: 10.29207/resti.v4i5.2224

Abstract

DVD-R is a type of optical disc that stores data in a single burning process; however, the multisession feature allows data on this read-only medium to be erased. This research implements the acquisition of data deleted from a DVD-R using the Autopsy and FTK Imager forensic tools. The National Institute of Standards and Technology (NIST) method, commonly used in digital forensics for storage media, was followed through its stages of collection, examination, analysis, and reporting. The acquisition results from Autopsy and FTK Imager were validated against the original files before deletion by matching hash values. Of the ten files acquired from the DVD-R, FTK Imager detected two file systems (ISO9660 and Joliet), while Autopsy detected only one (UDF). FTK Imager successfully acquired all ten files with matching hash values, whereas Autopsy detected only seven, failing to find three files with the extensions *.MOV, *.exe, and *.rar, whose hash values were empty. In the performance test, FTK Imager therefore scored 100%, having found all deleted files, while Autopsy scored 70% because it could not detect those three file extensions.
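Hash-value matching of the kind used above to validate acquired files can be sketched with Python's standard hashlib. This is a generic illustration; the paper does not state which hash algorithm was used, so MD5 here is an assumption, and the byte strings are hypothetical stand-ins for file contents:

```python
import hashlib

def file_hash(data: bytes, algo: str = "md5") -> str:
    """Return the hex digest of a byte string under the given algorithm."""
    h = hashlib.new(algo)
    h.update(data)
    return h.hexdigest()

original = b"contents of evidence file"
recovered = b"contents of evidence file"
# A recovered file is considered valid only if its digest matches the original's
match = file_hash(original) == file_hash(recovered)
```

A single changed byte produces a completely different digest, which is why matching hashes are accepted as proof that the acquired copy is identical to the source.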
Desain Dempster Shafer dan Fuzzy Expert System dalam Mendeteksi Dini Penyakit Stroke laurentinus laurentinus
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (773.995 KB) | DOI: 10.29207/resti.v4i5.2227

Abstract

Indonesia's growing population, 265 million people in 2018, has brought an increase in disease sufferers, yet the number of hospitals in the regions has not grown with it. This leaves communities short of information and knowledge for dealing with serious diseases such as stroke, which strikes quickly. Stroke is the leading cause of disability and the number-two cause of death in the world, killing 6.2 million people in 2015, and it is a complex medical problem requiring diagnosis by a neurologist or internist; yet not all districts have such doctors available to provide fast service. Temporary stroke symptoms, called transient ischemic attacks (TIA), are warning signs before a stroke, so people need to recognize the signs of stroke early and treat it as a medical emergency. Based on this problem, an expert system is designed that can diagnose stroke early and provide stroke information to the community, based on expert sources, on an Android mobile phone, making it accessible to the wider community, including in the districts. The design uses the Dempster-Shafer method to measure the uncertainty of 20 stroke symptoms; the resulting disease intersections yield percentages for the likelihood of stroke, hypertension/high blood pressure, fever, and heart disease. Fuzzy logic is used to process 9 items of the patient's medical history. The authors combined the two methods to provide a stroke diagnosis based on symptoms and patient history, then evaluated the system using several metrics, including accuracy, precision, sensitivity (recall), F-measure (F1 score), and specificity, obtaining an expert system score of 0.786, which indicates good performance.
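Dempster's rule of combination, which the Dempster-Shafer method uses to fuse the uncertainty contributed by individual symptoms, can be sketched for two mass functions. The symptom masses below are hypothetical values chosen for illustration, not figures from the paper:

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # mass falling on the empty intersection
    # Normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

STROKE, FEVER = frozenset({"stroke"}), frozenset({"fever"})
THETA = STROKE | FEVER                      # the frame of discernment
m_symptom1 = {STROKE: 0.6, THETA: 0.4}      # hypothetical evidence from symptom 1
m_symptom2 = {STROKE: 0.7, THETA: 0.3}      # hypothetical evidence from symptom 2
m = combine(m_symptom1, m_symptom2)
```

Two independent symptoms each pointing weakly at stroke combine into a much stronger belief (0.88 here), which is how the system turns 20 uncertain symptoms into a single likelihood percentage.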
Analisis Sentimen Pada Twitter KAI Menggunakan Metode Multiclass Support Vector Machine (SVM) Dhina Nur Fitriana; Yuliant Sibaroni
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (460.616 KB) | DOI: 10.29207/resti.v4i5.2231

Abstract

Information in the form of unstructured text is increasing and has become commonplace on the internet, where it is easily found and exploited by business people and companies through social media. One such platform is Twitter, currently ranked sixth among the most widely accessed social media. Twitter's drawback is that its data are unstructured and large, which makes it difficult for businesses with limited resources to learn public opinion about their services. To make it easier for businesses to gauge public sentiment and improve future service, public sentiment on Twitter needs to be classified as positive, neutral, or negative. The Multiclass Support Vector Machine (SVM) is a supervised classification method that handles three-class classification; this paper uses the One-Against-All (OAA) approach to determine the class. The paper reports classification results for the OAA multiclass SVM with five different feature weightings (unigram, bigram, trigram, unigram+bigram, and word cloud) on tweet data, identifying the best accuracy and the important features when processing large data. The highest accuracy, 80.59%, was achieved by the unigram TF-IDF model combined with the OAA multiclass SVM with gamma 0.7.
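The One-Against-All decision step described above can be sketched as follows: one binary scoring function per sentiment class, with the predicted label taken as the argmax of the decision scores. The linear scorers below are hypothetical stand-ins for trained SVM decision functions:

```python
def oaa_predict(x, classifiers):
    """One-Against-All: pick the class whose binary classifier scores highest."""
    return max(classifiers, key=lambda label: classifiers[label](x))

# Hypothetical linear decision functions w.x + b, one per sentiment class
classifiers = {
    "positive": lambda x: 0.9 * x[0] - 0.2 * x[1] - 0.1,
    "neutral":  lambda x: 0.1 * x[0] + 0.1 * x[1] + 0.2,
    "negative": lambda x: -0.8 * x[0] + 0.7 * x[1] - 0.1,
}
label = oaa_predict([1.0, 0.0], classifiers)
# scores: positive 0.8, neutral 0.3, negative -0.9 → "positive"
```

Each binary SVM is trained to separate its class from the other two combined, so three classifiers suffice for the three sentiment classes.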
