Jurnal Teknik Informatika (JUTIF)
Jurnal Teknik Informatika (JUTIF) is an Indonesian national journal that publishes high-quality research papers in the broad field of Informatics, Information Systems, and Computer Science, encompassing software engineering, information system development, computer systems, computer networks, algorithms and computation, and the social impact of information and telecommunication technology. Jurnal Teknik Informatika (JUTIF) is published by the Informatics Department, Universitas Jenderal Soedirman, twice a year, in June and December. All submissions are double-blind reviewed by peer reviewers. All papers must be submitted in BAHASA INDONESIA. JUTIF has P-ISSN 2723-3863 and E-ISSN 2723-3871. The journal accepts scientific research articles, review articles, and final project reports from the following fields:
1. Computer systems organization: computer architecture, embedded systems, real-time computing
2. Networks: network architecture, network protocols, network components, network performance evaluation, network services
3. Security: cryptography, security services, intrusion detection systems, hardware security, network security, information security, application security
4. Software organization: interpreters, middleware, virtual machines, operating systems, software quality
5. Software notations and tools: programming paradigms, programming languages, domain-specific languages, modeling languages, software frameworks, integrated development environments
6. Software development: software development process, requirements analysis, software design, software construction, software deployment, software maintenance, programming teams, open-source model
7. Theory of computation: models of computation, computational complexity
8. Algorithms: algorithm design, analysis of algorithms
9. Mathematics of computing: discrete mathematics, mathematical software, information theory
10. Information systems: database management systems, information storage systems, enterprise information systems, social information systems, geographic information systems, decision support systems, process control systems, multimedia information systems, data mining, digital libraries, computing platforms, digital marketing, World Wide Web, information retrieval
11. Human-computer interaction: interaction design, social computing, ubiquitous computing, visualization, accessibility
12. Concurrency: concurrent computing, parallel computing, distributed computing
13. Artificial intelligence: natural language processing, knowledge representation and reasoning, computer vision, automated planning and scheduling, search methodology, control methods, philosophy of artificial intelligence, distributed artificial intelligence
14. Machine learning: supervised learning, unsupervised learning, reinforcement learning, multi-task learning
15. Graphics: animation, rendering, image manipulation, graphics processing units, mixed reality, virtual reality, image compression, solid modeling
16. Applied computing: e-commerce, enterprise software, electronic publishing, cyberwarfare, electronic voting, video games, word processing, operations research, educational technology, document management
Articles
29 Documents
Issue: Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
ANALYSIS OF THE EFFECTIVENESS OF POLYNOMIAL FIT SMOTE MESH ON IMBALANCE DATASET FOR BANK CUSTOMER CHURN PREDICTION WITH XGBOOST AND BAYESIAN OPTIMIZATION
Faran, Jhiro;
Triayudi, Agung
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1284
Churn in the banking industry, that is, customers who leave or stop using a bank's services, is a serious problem that requires an appropriate solution. The aim of this research is to predict churn and support timely preventive action using machine learning. The dataset contains 10,000 bank customer records with 14 relevant features. Only about 20% of customers experience churn, creating a class imbalance problem for classification. To overcome this imbalance, the SMOTE oversampling technique was applied, and an extension of SMOTE, Polynomial Fit SMOTE Mesh (PFSM), was introduced. PFSM connects points in the data with a linear function and produces synthetic data along each connected segment. Experimental results show that the model developed using PFSM and optimized with Bayesian Optimization for the XGBoost algorithm achieved 86.1% accuracy, 70.87% precision, 53.81% recall, and a 61.17% F-score. This indicates that the approach improves predictive capability and identifies potential churners earlier. This research is highly relevant to the banking industry, helping banks retain their customers and improve business performance.
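PFSM itself is the authors' own method; as a rough point of reference, a minimal sketch of standard SMOTE-style interpolation (plain NumPy on invented toy data, not the paper's implementation) looks like this:

```python
import numpy as np

def smote_oversample(X_minority, n_synthetic, k=5, seed=0):
    """Generate synthetic minority samples by interpolating between each
    point and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_minority))
        x = X_minority[i]
        # distances to all minority points; skip the point itself at index 0
        d = np.linalg.norm(X_minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        nn = X_minority[rng.choice(neighbours)]
        gap = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(x + gap * (nn - x))
    return np.array(synthetic)

# toy minority class (e.g. the ~20% of customers who churn), oversampled
X_min = np.array([[1.0, 2.0], [1.2, 1.9], [0.9, 2.2],
                  [1.1, 2.1], [1.3, 2.3], [0.8, 1.8]])
X_new = smote_oversample(X_min, n_synthetic=6)
print(X_new.shape)  # (6, 2)
```

Because each synthetic point lies on a segment between two real minority points, the oversampled data stays inside the minority class's region rather than introducing arbitrary noise.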
TEMPORAL SPATIAL PROPERTY PROFILING AND IDENTIFICATION OF EARTHQUAKE PRONE AREAS USING ST-DBSCAN AND K-MEANS CLUSTERING
Samsudin, Angga Radlisa;
Fudholi, Dhomas Hatta;
Iswari, Lizda
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1293
Indonesia lies at the confluence of three major tectonic plates, the Indo-Australian, Eurasian, and Pacific plates, so earthquakes occur frequently, including in West Nusa Tenggara (NTB) Province. One way to accelerate disaster mitigation is to analyze earthquake occurrences in their spatial and temporal aspects. This study uses 2018 data from BMKG for NTB Province, totaling 3,699 earthquake events, analyzed using ST-DBSCAN and K-Means. ST-DBSCAN was used to identify earthquake-prone areas based on the date and location of events, while K-Means used earthquake depth and magnitude. The results show that the distribution of earthquakes in the NTB region follows a stationary pattern, and that similar prone areas exist based on location and time of occurrence as well as earthquake strength and depth. ST-DBSCAN on the latitude and longitude attributes produced one cluster covering 96.33% of the data, while K-Means on the depth and magnitude attributes produced four clusters, the number of clusters being chosen using the silhouette score, which ranges between -1 and 1; the reported silhouette score of 18.527 was found in cluster 1. Earthquake-prone areas are located in the Gangga and Bayan sub-districts of North Lombok and the Sambelia and Sembalun sub-districts of East Lombok. The sub-district with the most frequent earthquakes is Sambelia, with 112 events, while the strongest earthquakes on average occurred in Gangga sub-district, with magnitudes of 4 to 6.2 on the Richter scale and shallow earthquake types. The prone area lies at the foot of a mountain and borders directly on the ocean.
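The K-Means step of such a pipeline can be sketched as follows; the (depth, magnitude) values below are invented toy data, not the BMKG records:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: assign each point to its nearest centroid,
    then recompute centroids, until assignments stabilise."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centroid -> (n, k) matrix
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# toy (depth_km, magnitude) pairs: a shallow/weak and a deep/strong group
X = np.array([[10, 3.1], [12, 3.3], [11, 3.0],
              [90, 5.5], [95, 5.8], [88, 5.6]], dtype=float)
labels, cents = kmeans(X, k=2)
print(labels)
```

In practice the number of clusters k would then be selected by computing the silhouette score for several candidate k values, as the study describes.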
FISH FRESHNESS PREDICTION WITH CONVOLUTIONAL NEURAL NETWORK METHOD BASED ON FISH EYE IMAGE ANALYSIS
Mahendra, Robby;
Faurina, Ruvita
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1351
The potential of fish resources in Bengkulu waters is abundant, but quality must be maintained for safety and selling value. Changes in the skin, eyes, gills, and flesh of fish indicate a decrease in quality due to enzymatic, chemical, and bacterial activity. Fish sorting by fishermen or sellers is still often done manually, which is sometimes inaccurate due to the limits of human vision. With advances in computing technology, classification algorithms are needed that can identify and differentiate fresh fish from non-fresh fish. This research uses a Convolutional Neural Network with the DenseNet201, VGG16, and InceptionV3 architectures. The dataset contains 880 eye images of the Belato (Alepes djedaba) fish, split 80:15:5 into training, validation, and test sets. DenseNet201 performed best: test accuracy was 98% for DenseNet201, 95% for InceptionV3, and 91% for VGG16. Classification with the best model on 8 images under various scenarios showed that all images were classified 100% correctly. This research contributes to fishery product processing technology, allowing fish quality classification to be carried out quickly and accurately and increasing efficiency in ensuring the quality of fish for consumption.
COMPARISON PERFORMANCE OF WORD2VEC, GLOVE, FASTTEXT USING SUPPORT VECTOR MACHINE METHOD FOR SENTIMENT ANALYSIS
Anjani, Margaretha;
Irmanda, Helena Nurramdhani
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1366
Spotify is a digital audio service that provides music and podcasts. The reviews an application receives can influence users deciding whether to download it. The unstructured nature of review text is a challenge for text processing, so word embeddings are required to produce a valid sentiment analysis. The dataset is split 80:20 into training and testing data. Word2Vec, GloVe, and FastText are used for feature expansion, and the Support Vector Machine (SVM) is used for classification. These three word embedding methods were chosen because, compared to traditional feature engineering such as Bag of Words, they can capture the semantic, syntactic, and contextual meaning around words. Evaluation shows that GloVe produces the best performance among the word embeddings, with 85% accuracy, 90% precision, 79% recall, and an 85% f1-score.
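One common way to feed Word2Vec/GloVe/FastText vectors into an SVM is to average the word vectors of each review into a single document vector. The sketch below uses tiny made-up 2-dimensional embeddings rather than real pretrained vectors, purely to show the shape of the feature-expansion step:

```python
import numpy as np

# toy "pretrained" embeddings (in practice: Word2Vec, GloVe, or FastText)
emb = {
    "love":  np.array([0.9, 0.1]),
    "great": np.array([0.8, 0.2]),
    "hate":  np.array([0.1, 0.9]),
    "crash": np.array([0.2, 0.8]),
}

def doc_vector(tokens, emb, dim=2):
    """Average the embeddings of known words; zero vector if none are known."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

v_pos = doc_vector("love this great app".split(), emb)
v_neg = doc_vector("hate the crash".split(), emb)
print(v_pos, v_neg)
```

These fixed-length document vectors are then what a linear classifier such as an SVM is trained on.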
IMPLEMENTATION OF RSA AND AES-128 SUPER ENCRYPTION ON QR-CODE BASED DIGITAL SIGNATURE SCHEMES FOR DOCUMENT LEGALIZATION
Nuraeni, Fitri;
Kurniadi, Dede;
Rahayu, Diva Nuratnika
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1426
Maintaining the confidentiality and integrity of electronic documents is essential in the modern digital age, and digital signatures are essential for safeguarding and legalizing them. The current issue, however, goes beyond digital signatures and centers on further enhancing security and data integrity. Therefore, RSA and AES-128 super encryption is applied in a QR-code-based digital signature scheme for document legalization. The research stages entail constructing the super encryption algorithm, testing it experimentally for security and performance, and designing a digital signature system using RSA and AES-128 super encryption. The results show that RSA and AES super encryption delivers better data security: encryption and decryption times remain close to those of RSA alone, while its entropy values are better than those of RSA or AES-128 individually. The combination of RSA and AES-128 super encryption can therefore increase the security level of electronic documents and reduce the risk of hacking. Moreover, the proposed QR-code-based digital signature scheme is also very efficient in terms of file size and processing time.
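The layered ("super") encryption flow can be illustrated with a deliberately simplified toy: a symmetric cipher encrypts the message, and RSA wraps the symmetric key. The XOR stream below is only a stand-in for AES-128, and the textbook RSA uses insecure tiny primes; nothing here reflects the paper's actual implementation:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # symmetric layer (toy stand-in for AES-128); XOR is its own inverse
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# textbook RSA with tiny primes p=61, q=53 -> n=3233, e=17, d=2753 (insecure, illustrative)
n, e, d = 3233, 17, 2753
def rsa_encrypt(m: int) -> int: return pow(m, e, n)
def rsa_decrypt(c: int) -> int: return pow(c, d, n)

sym_key = b"\x2a"                          # 1-byte toy session key
message = b"signed document hash"

ciphertext = xor_cipher(message, sym_key)  # inner symmetric layer
wrapped_key = rsa_encrypt(sym_key[0])      # outer asymmetric layer wraps the key

# receiver: unwrap the session key with RSA, then undo the symmetric layer
recovered_key = bytes([rsa_decrypt(wrapped_key)])
plaintext = xor_cipher(ciphertext, recovered_key)
print(plaintext)
```

The design point is that the slow asymmetric cipher only ever touches the short key, so total processing time stays close to a single RSA operation while the bulk data gets the symmetric layer's protection.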
APPLICATION OF PROCEDURAL CONTENT GENERATION SYSTEM IN FORMING DUNGEON LEVEL IN DUNGEON DIVER GAME
Eka Wahyu Hidayat;
Euis Nur Fitriani Dewi;
Insan Saleh Ramadhan
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1465
Developers face numerous challenges in game development, one of which is a lack of replayability due to the limited variety of levels created; the absence of level variety can lead to player boredom. The Procedural Content Generation (PCG) method provides an effective solution to this challenge. PCG is applied here using the Cellular Automata method with the Von Neumann neighborhood rule. The objective of this paper is to apply Procedural Content Generation to create levels in game development. The game is developed using Luther's MDLC method. Testing uses tile grids of 32x32 and 64x64 units with three fill percentages (25%, 45%, and 65%), each tested with smooth amounts of 2, 4, and 6 and a randomly selected seed. Performance testing shows that generating dungeon levels at both 32x32 and 64x64 takes a short and similar time, around 0.08 to 0.3 seconds. Functional testing reveals that a 25% fill percentage results in nearly empty rooms with no footholds, a 45% fill percentage produces levels with space and footholds, and a 65% fill percentage generates small unconnected rooms. A 45% fill percentage is therefore considered the most appropriate for dungeon levels, as it provides suitable space and footholds for players. Implementing PCG in level creation not only saves time compared to manual level design but also produces varied dungeon shapes and difficulty levels more efficiently.
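A random-fill-then-smooth cellular-automata pass with the Von Neumann neighborhood can be sketched roughly as follows; the majority-rule thresholds here are illustrative choices, not the paper's exact rules:

```python
import random

def generate_dungeon(w, h, fill_pct, smooth_steps, seed=42):
    """Random fill, then cellular-automata smoothing over the Von Neumann
    neighbourhood (the 4 orthogonal neighbours). 1 = wall, 0 = floor."""
    rng = random.Random(seed)
    grid = [[1 if rng.random() < fill_pct else 0 for _ in range(w)] for _ in range(h)]
    for _ in range(smooth_steps):
        new = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                walls = sum(
                    grid[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w
                )
                # illustrative majority rule: mostly walls -> wall,
                # mostly floor -> floor, otherwise keep the current tile
                new[y][x] = 1 if walls >= 3 else 0 if walls <= 1 else grid[y][x]
        grid = new
    return grid

dungeon = generate_dungeon(32, 32, fill_pct=0.45, smooth_steps=4)
print(sum(map(sum, dungeon)))  # number of wall tiles
```

Varying `fill_pct` and `smooth_steps` reproduces the kind of parameter sweep the paper describes (25/45/65% fill with 2/4/6 smoothing passes).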
OPTIMIZING BUTTERFLY CLASSIFICATION THROUGH TRANSFER LEARNING: FINE-TUNING APPROACH WITH NASNETMOBILE AND MOBILENETV2
Putri, Ni Kadek Devi Adnyaswari;
Luthfiarta, Ardytha;
Putra, Permana Langgeng Wicaksono Ellwid
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1583
Butterflies play a significant role in ecosystems, especially as indicators of biological balance. Butterfly species are distinctly different, although some differ only in very subtle traits. Entomologists recognize butterfly species through manual taxonomy and image analysis, which is time-consuming and costly. Previous research has applied computer vision, but with small datasets, so the resulting programs recognize only a limited range of butterfly species. This research therefore applies computer vision with transfer learning, which improves pattern recognition on image data without training from scratch. Transfer learning's main technique is fine-tuning: matching parameter values to the architecture and freezing certain layers. Fine-tuning yields a significant increase in accuracy, visible when comparing results before and after the fine-tuning process. This research uses two Convolutional Neural Network architectures, MobileNetV2 and NASNetMobile, both of which achieve satisfactory accuracy in classifying 75 butterfly species with transfer learning. With fine-tuning, MobileNetV2 reaches 86% accuracy, while NASNetMobile reaches a slightly lower 85%.
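The freeze-the-base, train-the-head idea behind fine-tuning can be illustrated with a toy stand-in: here a fixed random projection plays the role of the frozen pretrained convolutional base, and only the new classification head is trained. All data and dimensions are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# "pretrained" feature extractor: a fixed (frozen) random projection,
# standing in for the convolutional base of MobileNetV2 / NASNetMobile
W_frozen = rng.normal(size=(4, 8))

# toy dataset: two separable classes, 20 samples each
X = rng.normal(size=(40, 4)) + np.repeat([[2.0], [-2.0]], 20, axis=0)
y = np.repeat([1.0, 0.0], 20)

feats = np.tanh(X @ W_frozen)        # frozen layers: never updated below

# fine-tuning: gradient descent on the new logistic-regression head only
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))   # sigmoid output
    grad = p - y                             # logistic-loss gradient
    w -= 0.1 * feats.T @ grad / len(y)       # head weights updated...
    b -= 0.1 * grad.mean()                   # ...while W_frozen stays fixed

acc = ((p > 0.5) == y).mean()
print(acc)
```

The design point mirrors real fine-tuning: the expensive-to-train feature extractor is reused as-is, and only a small number of new parameters are fitted to the target task.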
IMPLEMENTATION OF NATURAL LANGUAGE PROCESSING (NLP) IN CONSUMER SENTIMENT ANALYSIS OF PRODUCT COMMENTS ON THE MARKETPLACE
Alinda Rahmi, Nadya;
Wulan Dari, Rahmatia
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1666
Marketplace product reviews are invaluable information if processed carefully. Analyzing product reviews involves more than star ratings; a comprehensive examination of the full content of review comments is essential to extract the nuances of meaning conveyed by the reviewer. The current problem in analyzing marketplace purchase reviews is the large number of abbreviations and non-standard language used by commenters, which the system finds difficult to understand. A Natural Language Processing (NLP) approach is therefore needed to normalize the language in review comments so as to achieve maximal performance in sentiment analysis. This research uses the KNN and TF-IDF algorithms, coupled with NLP techniques, to categorize Muslim fashion product reviews into two groups: positive and negative. The NLP-enhanced classification achieved 76.92% accuracy, 80.00% precision, and 74.07% recall, surpassing the results obtained without NLP, which had 69.23% accuracy, 80.00% precision, and 64.52% recall. Frequently appearing words in reviews describe collective buyer sentiment about the product: positive reviews indicate customer satisfaction with quality, speed of delivery, and price, while negative reviews indicate dissatisfaction with factors such as color differences and discrepancies in the number of items received.
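A minimal TF-IDF computation of the kind that would feed the KNN classifier might look like this; the tokenized Indonesian review snippets are invented toy data, not the paper's corpus or preprocessing:

```python
import math
from collections import Counter

docs = [
    ["bagus", "pengiriman", "cepat"],      # "good, fast delivery"
    ["warna", "beda", "kecewa"],           # "different colour, disappointed"
    ["bagus", "murah"],                    # "good, cheap"
]

def tfidf(docs):
    """TF-IDF weights: term frequency scaled by inverse document frequency,
    so terms appearing in fewer documents get higher weight."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return out

weights = tfidf(docs)
print(weights[0])
```

KNN then classifies a new review by comparing its TF-IDF vector against the labeled training vectors, so a discriminative word like "kecewa" carries more weight than a common word like "bagus".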
PENETRATION TESTING OF A COMPUTERIZED PSYCHOLOGICAL ASSESSMENT WEBSITE USING SEVEN ATTACK VECTORS FOR CORPORATION WEBSITE SECURITY
J, Rizky Rachman;
Patty, Jonathan Suara
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1731
Websites are dynamic platforms that undergo regular updates and continuous use. Consequently, attack methods evolve alongside the security measures implemented in website systems, aiming to exploit both the website itself and its users. Website systems and features must remain prepared for potential attacks at all times, so penetration testing needs to be performed consistently to keep up with security standards. This research aims to demonstrate the vulnerabilities that penetration testing can uncover and to derive recommendations for improving a website. It involves black-box penetration testing of a computerized psychological testing website developed by PT Dwi Purwa Teknologi, hereinafter referred to as the client. The testing simulated attacks by an outside party unfamiliar with the website's structure, focusing on seven attack vectors: SQL injection, RCE, URL manipulation, CSRF, SSRF, XSS, and broken authentication and session management. Vulnerabilities stemmed from poorly sanitized input forms, leading to SQL injection and RCE risks; inadequate input validation enabled cross-site scripting attacks, while missing CSRF tokens exposed the website to CSRF threats. The research underscores the importance of penetration testing in identifying and addressing security weaknesses, empowering the client to fortify their website against potential cyber threats.
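The SQL injection finding above comes down to unsanitized input reaching the query string. A self-contained sqlite3 sketch (a hypothetical `users` table, not the client's system) shows both the attack and the parameterized-query fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(u, p):
    # vulnerable: attacker input is concatenated directly into the SQL string
    q = f"SELECT * FROM users WHERE username = '{u}' AND password = '{p}'"
    return conn.execute(q).fetchone() is not None

def login_safe(u, p):
    # parameterized query: the driver treats the input strictly as data
    q = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (u, p)).fetchone() is not None

payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True  -- injection bypasses the check
print(login_safe("alice", payload))    # False -- injection neutralized
```

The same principle, keeping untrusted input out of the executable part of a statement, underlies the fixes for the XSS and RCE findings as well.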
NAIVE BAYES AND PARTICLE SWARM OPTIMIZATION IN EARLY DETECTION OF CHRONIC KIDNEY DISEASE
Nurdin, Hafis;
Suhardjono, Suhardjono;
Wuryanto, Anus;
Yuliandari, Dewi;
Sugiarto, Hari
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman
DOI: 10.52436/1.jutif.2024.5.3.1750
Chronic Kidney Disease (CKD) is a global health problem that requires early detection to reduce the risk of complications and disease progression. The Naïve Bayes (NB) algorithm has proven effective in detecting CKD, but its accuracy still varies, and previous research has not fully optimized existing algorithms for accuracy and efficiency. This research aims to develop a more accurate and efficient early detection method for CKD using the NB algorithm with Particle Swarm Optimization (PSO). NB is known for its speed and ease of implementation, while PSO contributes global search capabilities for parameter optimization. The dataset comes from the UCI repository, and the workflow covers data pre-processing, NB implementation, performance evaluation, and enhancement with PSO. NB+PSO shows a significant increase in accuracy, to 95.75% from 95.00%, and in Area Under Curve (AUC), to 0.910 from 0.802, compared to NB alone. The study concludes that the NB+PSO combination increases effectiveness in early detection of CKD and opens opportunities for further development in the medical field, especially in improving diagnostic accuracy for other diseases.
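A minimal PSO loop of the kind used to tune a classifier's parameters can be sketched as follows; the quadratic objective merely stands in for a cross-validated NB error surface, and the coefficients are common illustrative defaults rather than the paper's settings:

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=0):
    """Minimal 1-D particle swarm: each particle keeps its personal best and
    is pulled toward both that and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)      # particle positions
    v = np.zeros(n_particles)                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive pull (personal best) + social pull (global best)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(xi) for xi in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

# stand-in objective: imagine f as the validation error of NB at parameter a
best = pso_minimize(lambda a: (a - 1.3) ** 2, 0.0, 5.0)
print(round(best, 2))
```

Because PSO only needs objective-function evaluations, it can wrap any classifier's validation score without gradient information, which is what makes it a convenient optimizer for NB.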