Contact Name
Tri Anggraeni
Contact Email
tri.anggraeni@mmtc.ac.id
Phone
+62895391032353
Journal Mail Official
jitu@mmtc.ac.id
Editorial Address
Jln. Magelang Km. 6 Sleman, D.I. Yogyakarta, 55284
Location
Kab. Sleman,
Daerah Istimewa Yogyakarta
INDONESIA
Journal of Information Technology and its Utilization
ISSN: 2985-4067 | EISSN: 2654-802X | DOI: https://doi.org/10.56873/jitu
The journal explores scientific developments in the field of information technology and its utilization, including data mining, the Internet of Things (IoT), artificial intelligence, digital processing, and information systems.
Articles in issue "Vol 5 No 1 (2022)": 5 documents
AN IMPLEMENTATION OF TWO-FACTOR AUTHENTICATION TECHNOLOGY USING TIME-BASED ONE TIME PASSWORD (TOTP) METHOD ON PRIVATE CLOUD STORAGE WEBSITE FOR GUIDANCE AND COUNSELING TEACHER Jerry Olbinson
Journal of Information Technology and Its Utilization Vol 5 No 1 (2022)
Publisher : Sekolah Tinggi Multi Media

DOI: 10.56873/jitu.5.1.3854

Abstract

A guidance and counseling teacher, as a helping profession, provides counseling services for all students. The first activity in providing counseling services is collecting student data, and the data collected are confidential. Technological advances in private cloud storage and two-factor authentication can help the guidance and counseling teacher store and protect student data. Cloud storage is one solution for centralizing data management and making access easier for system users. It must be paired with security technology in the application to guarantee the security of the data it holds. Two-factor authentication can combine a password with a time-based one-time password, which is generated from a shared secret key and the current timestamp and is widely used in applications that apply two-factor authentication. This study aims to build a private cloud storage website for guidance and counseling teachers with two-factor authentication using the time-based one-time password method. The website was evaluated with black-box testing and user testing. The results show that the website was well received: it helps the guidance and counseling teacher share data securely, and the average score across all questionnaire items was 85%.
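The TOTP mechanism described in this abstract combines a shared secret key with the current timestamp. The sketch below shows the idea in Python using only the standard library; the 30-second time step, six digits, and HMAC-SHA1 are the common RFC 6238 defaults and are assumptions here, not parameters taken from the paper.

```python
# Minimal TOTP sketch (RFC 6238 defaults: 30 s time step, 6 digits, HMAC-SHA1).
# The secret and parameters are illustrative, not taken from the paper.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and the current time."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step           # current timestamp -> counter
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret_b32: str, submitted: str) -> bool:
    """Second-factor check: compare the submitted code with the server-side code."""
    return hmac.compare_digest(totp(secret_b32), submitted)


if __name__ == "__main__":
    # The secret would normally be shown to the user once, e.g. as a QR code.
    secret = base64.b32encode(b"demo-shared-secret").decode()
    print("current OTP:", totp(secret))
```

In a two-factor login flow of this kind, the website stores the same secret that the authenticator app holds and calls a check like verify() on the code the teacher types in after entering the password.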
COMBINING SUPERVISED AND UNSUPERVISED METHODS IN TOURISM VISITOR DATA Weksi Budiaji; Vebriana Vebriana; Juwarin Pancawati
Journal of Information Technology and Its Utilization Vol 5 No 1 (2022)
Publisher : Sekolah Tinggi Multi Media

DOI: 10.56873/jitu.5.1.4659

Abstract

Combining supervised and unsupervised methods can assist the data analysis process. This research applies a supervised method, Poisson regression, followed by an unsupervised method, cluster analysis, to visitors in a tourism dataset. A purposive sample of 80 visitors was taken at Flower Garden X in Serang Regency, Banten Province. The dataset consists of the number of visits, travel cost, monthly income or stipend, gender, age, distance from the place of origin, and perception, which is formed from 11 questions on facilities and services. Poisson regression applied to 30, 40, and 50 bootstrap samples identified perception as the significant feature. Medoid-based cluster analysis, i.e. PAM and simple k-medoids, was then applied to the perception data, comparing simple matching and co-occurrence distances and validating the results with the medoid-based shadow value. The visitors were grouped into five clusters, the most suitable number of clusters. The combined supervised and unsupervised methods identified cleanliness as the important indicator, so improvement of the tourism object should focus on the cleanliness aspect.
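As a rough illustration of the two-stage pipeline above, the Python sketch below fits a Poisson regression on the visit counts and then clusters the perception items with a small PAM-style k-medoids on a simple matching (Hamming) distance. The file name, column names, toolchain, and k = 5 are assumptions for this sketch and do not reproduce the authors' analysis.

```python
# Illustrative two-stage pipeline: Poisson regression (supervised) on the visit
# counts, then k-medoids clustering (unsupervised) on the 11 perception items.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.spatial.distance import pdist, squareform

df = pd.read_csv("visitors.csv")  # hypothetical export of the 80 survey responses

# Stage 1: Poisson regression of the number of visits on the other variables
# (all predictors assumed numerically coded, e.g. gender as 0/1).
predictors = ["travel_cost", "income", "gender", "age", "distance", "perception"]
X = sm.add_constant(df[predictors])
fit = sm.GLM(df["visits"], X, family=sm.families.Poisson()).fit()
print(fit.summary())  # significant features (here: perception) feed the next stage

# Stage 2: k-medoids on the perception items; Hamming distance on categorical
# codes matches simple matching distance up to scaling.
perception = df[[f"q{i}" for i in range(1, 12)]].to_numpy()
dist = squareform(pdist(perception, metric="hamming"))

def k_medoids(dist: np.ndarray, k: int, n_iter: int = 50, seed: int = 0):
    """Tiny PAM-style loop (assumes clusters stay non-empty)."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(n_iter):
        labels = dist[:, medoids].argmin(axis=1)       # assign to the nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            within = dist[np.ix_(members, members)]
            new_medoids[c] = members[within.sum(axis=1).argmin()]  # most central member
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

labels, medoids = k_medoids(dist, k=5)
print(pd.Series(labels).value_counts())                # sizes of the five clusters
```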
NAIVE BAYES ALGORITHM IN HS CODE CLASSIFICATION FOR OPTIMIZING CUSTOMS REVENUE AND MITIGATION OF POTENTIAL RESTITUTION Hafizh Adam Muslim
Journal of Information Technology and Its Utilization Vol 5 No 1 (2022)
Publisher : Sekolah Tinggi Multi Media

DOI: 10.56873/jitu.5.1.4740

Abstract

The Directorate General of Customs and Excise, as a government revenue collector, must maximise import duty receipts each year. One common issue is the refund of import duty and/or administrative penalties in the form of fines based on the objection decision document; rulings of the Tax Court can thus reduce gross receipts at the Customs Office. Data mining techniques can provide valuable information about HS Code classification, which can assist customs officers in determining duties and/or customs values. This study uses data from the Notification of Import of Goods at Customs Regional Office XYZ from 2018 to 2020. The Cross-Industry Standard Process for Data Mining (CRISP-DM) model is followed, and the Naive Bayes algorithm in RapidMiner 9.10 is used for data classification. According to the model, the accuracy is 99.97 percent, the classification error is 0.03 percent, and the Kappa coefficient is 0.999.
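The paper builds its Naive Bayes model inside RapidMiner 9.10. As a rough scikit-learn counterpart, the sketch below trains a categorical Naive Bayes classifier on import-declaration features and reports accuracy, classification error, and the Kappa coefficient; the file name and column names are placeholders, not fields from the actual import data.

```python
# Illustrative scikit-learn counterpart of the RapidMiner workflow: Naive Bayes
# on categorical import-declaration features, with accuracy, error, and kappa.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score, cohen_kappa_score

df = pd.read_csv("import_declarations.csv")          # hypothetical declaration extract
features = ["importer_type", "country_of_origin", "port", "tariff_chapter"]

enc = OrdinalEncoder()
X = enc.fit_transform(df[features])                  # categories encoded as 0..n-1
y = df["hs_code_class"]                              # hypothetical target column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Tell the model how many categories each feature has so that categories seen
# only in the test split do not break prediction.
n_cats = [len(c) for c in enc.categories_]
clf = CategoricalNB(min_categories=n_cats).fit(X_train, y_train)
pred = clf.predict(X_test)

acc = accuracy_score(y_test, pred)
print(f"accuracy             : {acc:.4%}")
print(f"classification error : {1 - acc:.4%}")
print(f"kappa                : {cohen_kappa_score(y_test, pred):.3f}")
```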
NEXT WORD PREDICTION USING LSTM Afika Rianti; Suprih Widodo; Atikah Dhani Ayuningtyas; Fadlan Bima Hermawan
Journal of Information Technology and Its Utilization Vol 5 No 1 (2022)
Publisher : Sekolah Tinggi Multi Media

DOI: 10.56873/jitu.5.1.4748

Abstract

Next word prediction which is also called as language modelling is one field of natural language processing that can help to predict the next word. It’s one of the uses of machine learning. Some researchers before had discussed it using different models such as Recurrent Neural Networks and Federated Text Models. Each researcher used their own models to make the prediction and so the researcher here. Researchers here chose to make the model using  Long Short Term Memory (LSTM) model with 200 epoch for the training. For the dataset, the researcher used web scraping. The dataset contains 180 Indonesian destinations from nine provinces. For the libraries, researchers used  tensorflow, keras, numpy, and matplotlib. To download the model in json format, the researcher used tensorflowjs. Then for the tool to code, the researcher used Google Colab. The last result is 8ms/step, loss: 55%, and accuracy: 75% which means it’s good enough and can be used to predict next words.
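A condensed Keras sketch of the kind of LSTM next-word model described above is shown below; the corpus file, vocabulary handling, and layer sizes are placeholders, while training for 200 epochs follows the abstract.

```python
# Condensed Keras sketch of an LSTM next-word predictor trained on prefix pairs.
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

corpus = open("destinations.txt", encoding="utf-8").read().lower().splitlines()

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
vocab_size = len(tokenizer.word_index) + 1

# Build (prefix -> next word) training sequences from every sentence.
sequences = []
for line in corpus:
    ids = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(ids)):
        sequences.append(ids[: i + 1])

max_len = max(len(s) for s in sequences)
sequences = pad_sequences(sequences, maxlen=max_len, padding="pre")
X, y = sequences[:, :-1], sequences[:, -1]            # inputs and next-word targets

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 100),
    tf.keras.layers.LSTM(150),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=200, verbose=1)                # 200 epochs, as in the abstract

def predict_next(seed_text: str) -> str:
    """Return the most likely next word for the given seed text."""
    ids = tokenizer.texts_to_sequences([seed_text.lower()])[0]
    ids = pad_sequences([ids], maxlen=max_len - 1, padding="pre")
    next_id = int(np.argmax(model.predict(ids, verbose=0), axis=-1)[0])
    return tokenizer.index_word.get(next_id, "")

print(predict_next("pantai"))  # e.g. suggest the word after "pantai" (beach)
```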
WEBSITE PHISHING DETECTION APPLICATION USING SUPPORT VECTOR MACHINE (SVM) Diki Wahyudi; Muhammad Niswar; A. Ais Prayogi Alimuddin
Journal of Information Technology and Its Utilization Vol 5 No 1 (2022)
Publisher : Sekolah Tinggi Multi Media

DOI: 10.56873/jitu.5.1.4836

Abstract

Phishing is a criminal act that aims to obtain someone's confidential information, such as usernames, passwords, and credit card details, by impersonating a trusted person or business in official electronic communication, such as e-mail or instant messages, or by providing fake websites that closely resemble the original. The growing use of electronic media has been accompanied by a rise in cybercrime, including phishing attacks. Therefore, to minimize phishing attacks, a system that can detect them is needed, and machine learning is one method for building such a detection system. The data used in this research consist of 11,055 website records divided into two classes, namely "legitimate" and "phishing", and are split using 10-fold cross-validation. The algorithm used is the Support Vector Machine (SVM), which is compared with the decision tree and k-nearest neighbor algorithms, with parameter optimization for each algorithm. From the test results, the best accuracy was 85.71%, obtained using an SVM with a polynomial kernel of degree 9 and C = 2.5.
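The model comparison reported above can be approximated with scikit-learn as in the sketch below; the dataset path, label column, and k-NN/decision-tree settings are placeholders, while the polynomial kernel with degree 9, C = 2.5, and 10-fold cross-validation follow the abstract.

```python
# Sketch of the comparison in the abstract: SVM (polynomial kernel, degree 9,
# C = 2.5) versus decision tree and k-NN under 10-fold cross-validation.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("phishing_websites.csv")   # hypothetical file: 11,055 rows, label "Result"
X, y = df.drop(columns=["Result"]), df["Result"]

models = {
    "SVM (poly, degree=9, C=2.5)": make_pipeline(
        StandardScaler(), SVC(kernel="poly", degree=9, C=2.5)
    ),
    "Decision tree": DecisionTreeClassifier(random_state=42),
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name:28s} mean accuracy = {scores.mean():.4f}")
```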
