Contact Name
Yuhefizar
Contact Email
jurnal.resti@gmail.com
Phone
+628126777956
Journal Mail Official
ephi.lintau@gmail.com
Editorial Address
Politeknik Negeri Padang, Kampus Limau Manis, Padang, Indonesia.
Location
INDONESIA
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
ISSN: 2580-0760 | EISSN: 2580-0760 | DOI: https://doi.org/10.29207/resti.v2i3.606
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) is intended as a medium for scholarly publication of research results, ideas, and critical-analytical studies in Systems Engineering, Informatics/Information Technology, Informatics Management, and Information Systems. It is part of the spirit of disseminating knowledge from research and thought in service to the wider community, and of serving as an academic reference source in the field of information technology. Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) accepts scientific articles within the following research scope: Software Engineering; Hardware Engineering; Information Security; Systems Engineering; Expert Systems; Decision Support Systems; Data Mining; Artificial Intelligence Systems; Computer Networks; Computer Engineering; Image Processing; Genetic Algorithms; Information Systems; Business Intelligence and Knowledge Management; Database Systems; Big Data; Internet of Things; Enterprise Computing; Machine Learning; and other relevant topics.
Articles 1,046 Documents
Sistem Deteksi Hoax pada Twitter dengan Metode Klasifikasi Feed-Forward dan Back-Propagation Neural Networks Crisanadenta Wintang Kencana; Erwin Budi Setiawan; Isman Kurniawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (620.569 KB) | DOI: 10.29207/resti.v4i4.2038

Abstract

Social media is one of the ways to connect individuals around the world. It is also used by irresponsible people to spread hoaxes. A hoax is false news written as if it were true; it can cause anxiety and panic in society and affect social and political conditions. Today, one of the most popular social media platforms is Twitter, a place for sharing information where users around the world can share and receive news in short messages called tweets. Hoax detection has gained significant interest in the last decade. Existing hoax detection methods are based on either news content or social context using user-based features. In this study, we present hoax detection based on feed-forward and back-propagation (FF & BP) neural networks. In developing it, we used two vectorization methods, TF-IDF and Word2Vec. Our model is designed to automatically learn features for hoax news classification through several hidden layers built into the neural network. A neural network mimics the human brain's ability to receive stimuli, process them, and produce output: each neuron processes incoming information, passes it through the network's connections, and the network keeps learning until it can perform the classification. Our proposed model would be helpful in providing a better solution for hoax detection. Data were collected by crawling with the Twitter API, retrieving tweets that matched selected keywords and hashtags. The neural network's highest accuracy, 78.76%, was obtained using TF-IDF. We also found that data quality affects performance.
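As a rough sketch of the pipeline this abstract describes (TF-IDF vectorization feeding a feed-forward network trained with back-propagation), the following uses scikit-learn. The file name, column names, and layer sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming a labeled tweet CSV: TF-IDF features feeding a
# feed-forward network trained with back-propagation (MLPClassifier).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("tweets_labeled.csv")           # hypothetical crawled dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["tweet"], df["label"], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(max_features=5000)  # TF-IDF vectorization
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# MLPClassifier is a feed-forward network trained with back-propagation;
# the hidden-layer sizes here are illustrative, not taken from the paper.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)
clf.fit(X_train_vec, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_vec)))
```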
Implementasi Deteksi Rumor di Twitter Menggunakan Algoritma J48 Yoan Maria Vianny; Erwin Budi Setiawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (331.92 KB) | DOI: 10.29207/resti.v4i5.2059

Abstract

The existence of rumors on Twitter has caused a lot of unrest among Indonesians, since users cannot easily verify the validity of the information they receive. In this study, an Indonesian rumor detection system is built using the J48 algorithm combined with Term Frequency-Inverse Document Frequency (TF-IDF) weighting. The dataset contains 47,449 tweets that were manually labeled. This study offers new features, namely the number of emoticons in the display name, the number of digits in the display name, and the number of digits in the username; these three features are used to maximize information about the information source. The highest accuracy obtained is 75.76%, using 90% training data and 1,000 TF-IDF features over 1-gram to 3-gram combinations.
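For illustration only: J48 is Weka's implementation of C4.5, and scikit-learn's DecisionTreeClassifier with the entropy criterion is used below as a close stand-in. The loader function is hypothetical; the 1,000-feature, 1-3 gram TF-IDF setup and 90% training split follow the abstract.

```python
# Sketch of the reported best configuration; DecisionTreeClassifier(entropy)
# approximates J48/C4.5 rather than reproducing it exactly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

tweets, labels = load_rumor_dataset()   # hypothetical loader for the 47,449 tweets

vec = TfidfVectorizer(ngram_range=(1, 3), max_features=1000)
X = vec.fit_transform(tweets)

# 90% training data, matching the best configuration reported above
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, train_size=0.9, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy")   # information gain, as in C4.5
tree.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```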
Aplikasi Kombinasi Heuristik dalam Kerangka Hyper-Heuristic untuk Permasalahan Penjadwalan Ujian Gabriella Icasia; Raras Tyasnurita; Etria Sepwardhani Purba
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (480.765 KB) | DOI: 10.29207/resti.v4i4.2066

Abstract

The Examination Timetabling Problem is an optimization and combinatorial problem that has been proved to be non-deterministic polynomial (NP)-hard. On large datasets it becomes complex and time-consuming to solve manually, so heuristics exist to provide reasonably good solutions that meet the problem's constraints. In this study, a real-world examination timetabling dataset (the Toronto dataset) is solved using Hill-Climbing and Tabu Search. Unlike the approach in the literature, where Tabu Search is used as a meta-heuristic, we implemented Tabu Search within a hyper-heuristic framework. The main objective of this study is to provide a better understanding of applying Hill-Climbing and Tabu Search in hyper-heuristics to solve timetabling problems. The experiments show that Hill-Climbing and Tabu Search succeeded in automating the timetabling process, reducing the penalty by 18-65% from the initial solution. We also tested the algorithms over 10,000-100,000 iterations and compared the results with a previous study; most of the solutions generated in this experiment are better than those of the previous study, which also used the Tabu Search algorithm.
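A schematic sketch of the idea named above: a hyper-heuristic applies Tabu Search over low-level heuristics (move operators) rather than over timetable solutions directly, with hill-climbing acceptance. The initial solution, penalty function, and move operators are placeholders, not the paper's implementation.

```python
# Hyper-heuristic loop: choose a non-tabu low-level heuristic, apply it,
# accept non-worsening neighbours (hill climbing), and make heuristics
# that fail temporarily tabu.
import random
from collections import deque

def hyper_heuristic(initial_solution, penalty, low_level_heuristics,
                    iterations=10_000, tabu_tenure=5):
    current = best = initial_solution
    tabu = deque(maxlen=tabu_tenure)      # recently failing heuristics are tabu
    for _ in range(iterations):
        candidates = ([h for h in low_level_heuristics if h not in tabu]
                      or low_level_heuristics)      # fallback if all are tabu
        heuristic = random.choice(candidates)
        neighbour = heuristic(current)    # apply the chosen move operator
        if penalty(neighbour) <= penalty(current):  # hill-climbing acceptance
            current = neighbour
        else:
            tabu.append(heuristic)        # penalize the heuristic, not the move
        if penalty(current) < penalty(best):
            best = current
    return best
```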
Deteksi Emosi Wicara pada Media On-Demand menggunakan SVM dan LSTM Ainurrochman; Derry Pramono Adi; Agustinus Bimo Gumelar
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (620.798 KB) | DOI: 10.29207/resti.v4i5.2073

Abstract

To date, many speech datasets with emotion classes exist, but they rely on impromptu or deliberately acted expressions, where native speakers are given a stimulus for each emotional expression. Because natural conversation from secretly recorded daily communication raises ethical issues, using voice data sampled from movies and podcasts is the most appropriate way to obtain the best insights from speech: professional actors are trained, through the Stanislavski acting method, to induce emotions closest to natural ones. A speech dataset that meets this qualification is Human voice Natural Language from On-demand media (HENLO). HENLO contains per-emotion audio clips of films and podcasts originating from media on-demand, a motion-video entertainment platform whose content can be played and downloaded at any time. In this paper, we describe the use of sound clips from HENLO for learning with a Support Vector Machine (SVM) and Long Short-Term Memory (LSTM). Combining these two methods, we found the best strategy to be training the LSTM first and then feeding its model to the SVM, with an 80:20 data split. Over five training phases, the final accuracy increased by more than 17% compared to the first training. These results suggest that the two methods complement each other and that both are important for improving classification accuracy.
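A rough sketch of the two-stage strategy described above: an LSTM is trained on acoustic feature sequences first, and its learned representation is then fed to an SVM on an 80:20 split. All shapes, hyper-parameters, and the random placeholder data are assumptions; the LSTM training loop is omitted.

```python
# LSTM-then-SVM sketch in PyTorch + scikit-learn (illustrative only).
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

class EmotionLSTM(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)           # final hidden state summarizes the clip
        return self.head(h[-1]), h[-1]     # logits for training, embedding for SVM

# After training the LSTM (loop omitted), reuse its embeddings as SVM input.
model = EmotionLSTM()
X = torch.randn(200, 100, 40)              # placeholder acoustic feature sequences
y = torch.randint(0, 4, (200,))            # placeholder emotion labels
with torch.no_grad():
    _, emb = model(X)
X_tr, X_te, y_tr, y_te = train_test_split(emb.numpy(), y.numpy(), test_size=0.2)
svm = SVC().fit(X_tr, y_tr)                # 80:20 split, as in the abstract
print("SVM accuracy on LSTM features:", svm.score(X_te, y_te))
```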
Perbandingan CART dan Random Forest untuk Deteksi Kanker berbasis Klasifikasi Data Microarray Riska Chairunisa; Adiwijaya; Widi Astuti
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 5 (2020): Oktober 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (434.683 KB) | DOI: 10.29207/resti.v4i5.2083

Abstract

Cancer is one of the deadliest diseases in the world, with a mortality rate of 57.3% in Asia in 2018. Early diagnosis is therefore needed to avoid an increase in cancer mortality. As machine learning develops, cancer gene-expression data captured by microarrays can be processed for early detection of cancer outbreaks. The problem with microarray data, however, is its very large number of attributes, which makes dimensionality reduction necessary. To overcome this, this study used Discrete Wavelet Transform (DWT) for dimensionality reduction, with Classification and Regression Tree (CART) and Random Forest (RF) as classification methods. The purpose of using these two classifiers is to find out which one performs best when combined with DWT dimensionality reduction. This research uses five microarray datasets, namely Colon Tumor, Breast Cancer, Lung Cancer, Prostate Tumor, and Ovarian Cancer, from the Kent-Ridge Biomedical Dataset. The best accuracies obtained in this study were 76.92% on breast cancer with CART-DWT, 90.1% on colon tumors with RF-DWT, 100% on lung cancer with RF-DWT, 95.49% on prostate tumors with RF-DWT, and 100% on ovarian cancer with RF-DWT. From these results it can be concluded that RF-DWT performs better than CART-DWT.
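A minimal sketch of the RF-DWT pipeline described above, using PyWavelets: each sample's gene-expression vector is compressed to its DWT approximation coefficients before classification. The wavelet family, decomposition level, and the `load_microarray()` loader are assumptions, not the paper's settings.

```python
# DWT dimensionality reduction + Random Forest classification (sketch).
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_microarray()                  # hypothetical (samples, genes) matrix

def dwt_reduce(X, wavelet="haar", level=3):
    # Keep only the approximation coefficients: each DWT level roughly halves
    # the feature count while preserving the coarse expression profile.
    return np.vstack([pywt.wavedec(row, wavelet, level=level)[0] for row in X])

X_reduced = dwt_reduce(X)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(rf, X_reduced, y, cv=5).mean())
```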
Pengembangan Aplikasi Virtual Reality dengan Model ADDIE untuk Calon Tenaga Pendidik Anak dengan Autisme Dhomas Hatta Fudholi; Rahadian Kurniawan; Dimas Panji Eka Jalaputra; Izzati Muhimmah
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (782.079 KB) | DOI: 10.29207/resti.v4i4.2092

Abstract

Children with special needs require knowledgeable support to sustain their quality of life, which poses a challenge for prospective educators/teachers: deeper knowledge is needed to truly understand children with special needs. This research develops a skills-simulator application for prospective educators of children with autism using Virtual Reality technology. The application is used as a teaching medium and incorporates motion sensors to make the virtual experience look realistic. It was developed using the ADDIE method (Analysis, Design, Development, Implementation, and Evaluation). Development began by discovering the characteristics of autistic children in order to formulate the learning materials; the knowledge base on autistic children was obtained from a Sekolah Luar Biasa (SLB, special-needs school). Using this knowledge, a storyboard was designed and implemented. The application has been evaluated by 16 prospective educators of children with autism and two professional experts. In general, it helps prospective educators understand the characteristics of children with autism and provides a safe and pleasant environment for practicing teaching skills.
Analisis Recovery Bukti Digital Skype berbasis Smartphone Android Menggunakan Framework NIST Anton Yudhana; Abdul Fadlil; Muhammad Rizki Setyawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (1291.546 KB) | DOI: 10.29207/resti.v4i4.2093

Abstract

Cybercrime is an activity that uses electronic devices and network technology as tools or media to commit crimes, for example through the Skype application installed on a smartphone. Finding evidence in a cybercrime case requires a forensic activity known as digital forensics. This study aims to recover digital evidence that had been erased, using the NIST framework and forensic tools such as Oxygen and Belkasoft. On a Samsung J2 smartphone, when data was deleted through the application manager, Oxygen could not recover the deleted data, while Belkasoft achieved a 26% recovery rate; when data was deleted manually, Oxygen achieved 63% and Belkasoft 44%. On an Andromax A smartphone, when data was deleted through the application manager, neither Oxygen nor Belkasoft could recover the deleted data; when data was deleted manually, Oxygen recovered 61% while Belkasoft could not restore any data. It can be concluded that, for deletion through the application manager, Belkasoft performed better than Oxygen, while for manual deletion, Oxygen performed better than Belkasoft.
Penentuan Lokasi Industri Menggunakan Metode WASPAS Dengan Data Spasial Sebagai Data Kriteria Agusta Praba Ristadi Pinem; Siti Asmiatun; Astrid Novita Putri
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (519.479 KB) | DOI: 10.29207/resti.v4i4.2094

Abstract

Today, spatial data is used not only for geographic or transportation information but also for site selection, by integrating it with decision support system methods. The information generated can help in making decisions that meet the expected criteria. One method that can support the decision-making process is the Weighted Aggregated Sum Product Assessment (WASPAS), a Multi-Criteria Decision Making method that produces a ranked selection from the data or criteria used. This study uses the WASPAS method to determine strategic industrial locations from collected spatial data, applying several criteria with a different weight for each criterion. The WASPAS method can produce precise information on the determination of strategic industrial locations. A Spearman rank test against data on industrial locations in the city of Semarang shows strong agreement, with a resulting conformity value of 1.0. The result of this study is a system model that supports decisions on industrial location using the WASPAS method.
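A worked sketch of the WASPAS score itself: the joint criterion Q combines a Weighted Sum Model and a Weighted Product Model over a normalized decision matrix. The criteria values, weights, and benefit/cost split below are dummy assumptions; lambda = 0.5 is the common default, not necessarily the paper's choice.

```python
# WASPAS: Q_i = lam * WSM_i + (1 - lam) * WPM_i over normalized criteria.
import numpy as np

X = np.array([[250., 16., 12.],           # candidate locations x criteria (dummy)
              [200., 20., 8.],
              [300., 12., 10.]])
w = np.array([0.5, 0.3, 0.2])             # criterion weights, summing to 1
benefit = np.array([True, True, False])   # False marks a cost criterion

# Linear normalization: x / max for benefit criteria, min / x for cost criteria
norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)

lam = 0.5
wsm = (norm * w).sum(axis=1)              # Weighted Sum Model component
wpm = (norm ** w).prod(axis=1)            # Weighted Product Model component
Q = lam * wsm + (1 - lam) * wpm           # joint WASPAS score; highest Q wins
print("ranking (best first):", np.argsort(-Q))
```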
Visualization of Sales Field Activity Data Using Operations Dashboard for ArcGIS Angelia Destriana; Kristoko Dwi Hartomo; Hanna Prillysca Chernovita
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (670.028 KB) | DOI: 10.29207/resti.v4i4.2096

Abstract

Manually recording and visualizing data with high and complicated transaction volumes is no longer adequate for analyzing the errors that often occur in companies; the resulting information becomes too inaccurate for decision making. The problems companies commonly face in visualizing data are that it is not real-time, not integrated, and irregularly visualized. To minimize these problems, data visualization is needed to improve company performance. The researchers therefore propose a geographic information system that visualizes sales field-activity data in real time through the widgets of Operations Dashboard for ArcGIS (ODA). The stages of this research are a literature study, entering polygon zones, building application-based forms and coordination, inputting dummy data, collecting data, making maps, building the data-visualization application, and analyzing the data. The resulting system can monitor which workers perform well, as seen from the completed-task indicator: geobiz_admin completed the most tasks, with 6 completions. It can also track workers who leave their work zone; the analysis found one mobile worker who left Zone II and entered Zone I and Zone III.
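The zone-exit tracking above is, at its core, a point-in-polygon test against the entered polygon zones. Operations Dashboard for ArcGIS handles this through its own widgets; the shapely sketch below only mirrors the underlying check, with dummy coordinates and a hypothetical function name.

```python
# Geofence check behind a "worker left the zone" alert (illustrative).
from shapely.geometry import Point, Polygon

zone_ii = Polygon([(110.39, -7.00), (110.45, -7.00),
                   (110.45, -7.05), (110.39, -7.05)])   # dummy coordinates

def worker_in_zone(lon, lat, zone):
    """True if the tracked GPS fix falls inside the assigned work zone."""
    return zone.contains(Point(lon, lat))

print(worker_in_zone(110.41, -7.02, zone_ii))   # True: inside Zone II
print(worker_in_zone(110.50, -7.02, zone_ii))   # False: outside, flag for review
```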
Pemanfaatan Optical Character Recognition Dan Text Feature Extraction Untuk Membangun Basisdata Pengaduan Tenaga Kerja Yan Puspitarani; Yenie Syukriyah
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 4 No 4 (2020): Agustus 2020
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

Full PDF (242.186 KB) | DOI: 10.29207/resti.v4i4.2107

Abstract

The examination of complaints of labor violations is one of the main activities of the labor inspection section within the Department of Manpower. Inspectors examine companies suspected of violating labor laws based on letters of complaint sent by the relevant union organization or legal aid agency. Because communicating is so easy today, complaint letters can be sent directly as images through electronic media such as WhatsApp or email, which makes it difficult for administrative staff to recapitulate incoming complaints: they must read the letters and enter the data into the system manually. This research was therefore conducted to create a system that uses OCR technology and text feature extraction to input complaint data automatically. It produced a prototype for letter input and a database for letter storage that can later be used for Data Mining and Business Intelligence. OCR is implemented with the Tesseract library, while text feature extraction uses the Natural Language Toolkit (NLTK) library. Testing of the prototype showed an accuracy of 66.7% on OCR results and 91.67% on manually typed letters.
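A minimal sketch of the two stages named above: pytesseract wraps the Tesseract OCR engine, and NLTK tokenizes the recognized text so fields can be pulled out. The file name, the toy keyword-based extraction rule, and the chosen fields are assumptions for illustration, not the paper's feature-extraction scheme.

```python
# Tesseract OCR on a scanned complaint letter, then NLTK tokenization.
import pytesseract
from PIL import Image
import nltk

nltk.download("punkt", quiet=True)   # tokenizer model used by word_tokenize

text = pytesseract.image_to_string(Image.open("complaint_letter.jpg"), lang="ind")
tokens = nltk.word_tokenize(text)

def field_after(keyword):
    """Toy rule: return the token that follows a known field keyword."""
    lowered = [t.lower() for t in tokens]
    if keyword in lowered and lowered.index(keyword) + 1 < len(tokens):
        return tokens[lowered.index(keyword) + 1]
    return None

# Hypothetical fields: company name and date, keyed on Indonesian keywords.
print({"perusahaan": field_after("perusahaan"), "tanggal": field_after("tanggal")})
```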
