Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles: 1,172 Documents
Data Exploration Using Tableau and Principal Component Analysis Parhusip, Hanna Arini; Trihandaru, Suryasatriya; Heriadi, Adrianus Herry; Santosa, Petrus Priyo; Puspasari, Magdalena Dwi
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.952

Abstract

This study aims to determine the dominant chemical elements so as to improve the monitoring of the productivity and efficiency of a company's heavy engines over 2015-2021. The method commonly used is Scheduled Oil Sampling; this article proposes a new approach. The research problems are to analyze the chemical elements recorded from heavy engines and to visualize them with the Tableau program. The basic design of the study is to explore the given data through visualization and then apply Principal Component Analysis (PCA) to identify the chemical elements that affected engine wear during each engine's use in the 2015-2021 period. Because the elements in an oil sample fall into three categories, namely wear metals, contaminants, and oil additives, a technique is needed to isolate the influential elements, and PCA serves this purpose. Oil Sampling Analysis through data exploration in Tableau thus yields a new approach to analyzing the element data recorded from heavy vehicles. The main findings are given by the Tableau visualization, in which five machines were analyzed to obtain the principal components that cause engine wear. The visualization shows that one engine, coded MSD 012, experienced wear in 2018 and 2019, and that two main components, Ca and Mg, dominate that engine's wear. These results were confirmed with the related company, which then carried out further studies on the machine so that it could receive special treatment.
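For readers who want the gist of the analysis outside Tableau, below is a minimal Python sketch of PCA over oil-sample element readings. The file oil_samples.csv and its element columns are hypothetical placeholders, not the study's actual data; the paper's visual exploration itself happens in Tableau.

```python
# Minimal PCA sketch over oil-sample element readings (hypothetical schema).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical layout: one row per oil sample, one numeric column per element (ppm).
df = pd.read_csv("oil_samples.csv")          # e.g. columns: Fe, Cu, Al, Si, Ca, Mg, ...
elements = df.select_dtypes("number")

# Standardize so high-concentration elements do not dominate the components.
X = StandardScaler().fit_transform(elements)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Loadings show which elements dominate each principal component.
loadings = pd.DataFrame(pca.components_.T,
                        index=elements.columns,
                        columns=["PC1", "PC2"])
print(loadings.abs().sort_values("PC1", ascending=False).head())
print("explained variance:", pca.explained_variance_ratio_)
```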
Software Defect Prediction Framework Using Hybrid Software Metric Amirul Zaim; Johanna Ahmad; Noor Hidayah Zakaria; Goh Eg Su; Hidra Amnur
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.1258

Abstract

Software fault prediction is widely used in the software development industry, and software development has accelerated significantly during the pandemic. However, the main problem is that most fault prediction models disregard object-oriented metrics, and even academic researchers concentrate on predicting software problems only early in the development process. This research presents a procedure that includes object-oriented metrics to predict software faults at the class level, together with feature selection techniques, to assess the effectiveness of a machine learning algorithm at predicting software faults. The aim of this research is to assess the effectiveness of software fault prediction using feature selection techniques. In the present work, software metrics were used in defect prediction, and feature selection techniques were applied to select the best features from the dataset. The results show that process metrics yielded slightly better accuracy than code metrics.
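As a rough illustration of the pipeline, here is a hedged sketch of class-level defect prediction with a filter-style feature selection. The abstract does not name the classifier or dataset, so defects.csv, its "bug" label column, and the RandomForest stand-in are assumptions for illustration only.

```python
# Hedged sketch: feature selection before class-level defect prediction.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("defects.csv")              # hypothetical: OO/process/code metrics + "bug" label
X, y = df.drop(columns=["bug"]), df["bug"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Filter-style selection keeps the k metrics most associated with faults.
selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=42).fit(selector.transform(X_tr), y_tr)

print("selected:", list(X.columns[selector.get_support()]))
print("accuracy:", accuracy_score(y_te, clf.predict(selector.transform(X_te))))
```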
Evaluating Web Scraping Performance Using XPath, CSS Selector, Regular Expression, and HTML DOM With Multiprocessing Technical Applications Irfan Darmawan; Muhamad Maulana; Rohmat Gunawan; Nur Widiyasono
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.4.1525

Abstract

Data collection has become a necessity today, especially since many data sources on the internet can serve various needs. The main activity in data collection is gathering quality information that can be analyzed and used to support decisions or provide evidence. The process of retrieving data from the internet is also known as web scraping, and several web scraping methods are in common use. Given the amount of data scattered across the internet, web scraping done at a large scale is quite time-consuming. By applying the concept of parallelism, a multiprocessing approach can help complete such a job faster. This study aimed to determine the performance of web scraping methods when multiprocessing is applied. Testing was done by scraping data from a predetermined target website. Four web scraping methods (CSS Selector, HTML DOM, Regex, and XPath) were selected for the experiment and measured on four parameters: CPU usage, memory usage, execution time, and bandwidth usage. Based on the experimental data, the Regex method used the least CPU and memory compared to the other methods, XPath required the least execution time, and the CSS Selector method used the least bandwidth. Applying multiprocessing to each web scraping method was shown to save memory, reduce execution time, and reduce bandwidth usage compared to single processing.
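To make the setup concrete, the following sketch times two of the four methods (XPath via lxml and a regular expression) under a multiprocessing pool. The URL list is a placeholder, and this is only an assumed reconstruction of the experiment's shape, not the authors' harness.

```python
# Sketch: scraping a URL list with XPath vs. Regex under a process pool.
import re, time
from multiprocessing import Pool
import requests
from lxml import html

URLS = ["https://example.com/page/%d" % i for i in range(1, 21)]  # placeholder targets

def scrape_xpath(url):
    tree = html.fromstring(requests.get(url, timeout=10).content)
    return tree.xpath("//title/text()")

def scrape_regex(url):
    body = requests.get(url, timeout=10).text
    return re.findall(r"<title>(.*?)</title>", body, re.S)

if __name__ == "__main__":
    for fn in (scrape_xpath, scrape_regex):
        t0 = time.perf_counter()
        with Pool(processes=4) as pool:          # parallel workers, as in the study
            results = pool.map(fn, URLS)
        print(fn.__name__, "took %.2fs" % (time.perf_counter() - t0))
```

CPU, memory, and bandwidth would be measured alongside the timer (for example with a process monitor); only execution time is shown here to keep the sketch short.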
Classification of Student Graduation using Naïve Bayes by Comparing between Random Oversampling and Feature Selections of Information Gain and Forward Selection Dony Fahrudy; Shofwatul 'Uyun
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.4.982

Abstract

Class-imbalanced data with high attribute dimensions frequently cause issues in classification, as imbalanced class sizes and irrelevant attributes degrade an algorithm's performance during computation; techniques are therefore needed to overcome the class imbalance and to select features that reduce data complexity and remove irrelevant attributes. This study applied the random oversampling (ROs) method to overcome the class imbalance and compared two feature selections (information gain and forward selection) to determine which is superior, more effective, and more appropriate to apply. The results of feature selection were then used to classify student graduation with a Naïve Bayes classification model. The study showed an increase in the average accuracy of the Naïve Bayes method: 81.83% without ROs preprocessing or feature selection, 83.84% with ROs, 86.03% with information gain and 3 selected features, and 86.42% with forward selection and 2 selected features; this amounts to accuracy gains of 4.2% from no preprocessing to information gain and 4.59% from no preprocessing to forward selection. The best feature selection was therefore forward selection with 2 selected features (the 8th-semester GPA and the overall GPA), and ROs and both feature selections were proven to improve the performance of the Naïve Bayes method.
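The pipeline maps naturally onto scikit-learn and imbalanced-learn, as in the sketch below. The file graduation.csv and its "on_time" label are hypothetical, mutual information is used here as a proxy for information gain, and cross-validated accuracy stands in for the paper's evaluation protocol.

```python
# Sketch: random oversampling plus two feature selections before Naïve Bayes.
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_selection import (SelectKBest, mutual_info_classif,
                                       SequentialFeatureSelector)
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

df = pd.read_csv("graduation.csv")           # hypothetical: per-semester GPAs + "on_time" label
X, y = df.drop(columns=["on_time"]), df["on_time"]

# Random oversampling balances the minority graduation class.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)

nb = GaussianNB()
# Information gain ~ mutual information between each feature and the class.
ig = SelectKBest(mutual_info_classif, k=3).fit(X_bal, y_bal)
# Forward selection greedily adds the features that help the classifier most.
fwd = SequentialFeatureSelector(nb, n_features_to_select=2,
                                direction="forward").fit(X_bal, y_bal)

for name, sel in [("information gain", ig), ("forward selection", fwd)]:
    acc = cross_val_score(nb, sel.transform(X_bal), y_bal, cv=10).mean()
    print(name, list(X.columns[sel.get_support()]), "accuracy %.2f%%" % (100 * acc))
```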
Smart City Architecture Development Framework (SCADEF) Yuli Adam Prasetyo; Ichwan Habibie
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.1537

Abstract

A Smart City is a city that implements the latest technologies, such as big data, IoT, Artificial Intelligence, and other new technologies. A Smart City has different system characteristics than other systems: it involves several independent stakeholders, so its development needs to be designed with systems analysis and service-based planning. The Smart City Architecture Development Methodology (SCADM) was defined in previous research; however, the existing Enterprise Architecture approach has yet to specify the artefacts needed to complete the framework. This study proposes the Smart City Architecture Development Framework (SCADEF) as a comprehensive Enterprise Architecture framework for developing Smart City architecture. The architecture framework produced by SCADEF becomes the proposed framework for realizing a Smart City. SCADEF consists of SCADM, a meta-model of SCADM artefacts, and guidelines for implementing SCADEF. The research uses the observation, classification, and construction methodologies of Information System Design Methodology. In addition, this study tested the framework by implementing it on city objects, which serves as a practical test of the resulting enterprise architecture framework. SCADEF was implemented in the education and health fields of Bandung Smart City. Testing consisted of describing the implementation in Bandung Smart City and asking enterprise architecture experts for an assessment; the expert assessments were evaluated statistically with respect to the methodology, artefacts, and uses. The measurement results show that SCADEF is acceptable and usable for developing enterprise smart city architecture.
Decentralized Children's Immunization Record Management System for Private Healthcare in Malaysia Using IPFS and Blockchain Hafidzah Halim, Faiqah; Aimuni Md Rashid, Nor; Farahin Mohd Johari, Nur; Amirul Hazim Abdul Rahman, Muhammad
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.1264

Abstract

In Malaysia, private healthcare providers keep computerized records of vaccination data, including personal information, diagnostic results, and vaccine prescriptions. However, such sensitive information is commonly stored under a centralized storage paradigm, which raises the issue of maintaining user privacy. Unauthorized access to crucial information, such as identity details and the ailments a patient is suffering from, as well as the misuse of patients' data and medical reports, are common threats to a patient's privacy. To overcome this problem, the researchers suggest leveraging IPFS (InterPlanetary File System) and blockchain technology to create a decentralized children's immunization record management system. While respecting patient privacy, the proposed system gives authorized entities, such as healthcare professionals (e.g., doctors and nurses), easy access to medical data. The proposed decentralized system integrates IPFS, blockchain, and AES cryptography to ensure consistency, integrity, and accessibility. A permissioned Ethereum blockchain allows hospitals and patients within private healthcare providers to connect. We utilized a combination of symmetric and asymmetric key encryption to provide secure storage and selective access to records. The proposed system was analyzed using Wireshark to evaluate its overall performance in terms of integrity and accessibility while sharing patient records. This project aims to provide an automated record keeper using autonomous agents working collaboratively with the blockchain for further enhancement.
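A minimal sketch of the storage path (encrypt, then pin the ciphertext on IPFS) might look as follows. This assumes a local IPFS (kubo) CLI is installed, the record payload is illustrative, and the paper's asymmetric key exchange and on-chain write are only indicated in comments rather than implemented.

```python
# Sketch: AES-GCM-encrypt an immunization record, then pin the ciphertext on IPFS.
import os, subprocess, tempfile
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

record = b'{"child_id": "...", "vaccine": "MMR", "dose": 1}'  # illustrative payload

key = AESGCM.generate_key(bit_length=256)    # symmetric key; shared via asymmetric crypto
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, record, None)

# Assumes a local IPFS (kubo) daemon; `ipfs add -Q` prints the content identifier (CID).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(nonce + ciphertext)
cid = subprocess.check_output(["ipfs", "add", "-Q", f.name]).decode().strip()

# Only the CID (not the record itself) would then be written to the Ethereum chain.
print("store on-chain:", cid)
```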
A Univariate Extreme Value Analysis and Change Point Detection of Monthly Discharge in Kali Kupang, Central Java, Indonesia Herho, Sandy H. S.
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.953

Abstract

Kali Kupang plays an important role in the life of the people of Pekalongan and its surrounding areas. However, until recently, few hydrological studies had been carried out in this area. This study presents how Extreme Value Analysis (EVA) can predict future extreme hydrological events and how a dynamic-programming-based changepoint detection algorithm can detect abrupt transitions in the variability of discharge events. Using the annual block maxima, we can predict the upper extreme discharge probability from the generalized extreme value distribution (GEVD) that best fits the data, using the Markov Chain Monte Carlo (MCMC) algorithm as the distribution-fitting method. A Metropolis-Hastings (MH) algorithm with 500 walkers and 2,500 samples per walker is used to generate random samples from the prior distribution. As a result, the discharge data can be categorized as following a Gumbel distribution (μ = 6.818, σ = 3.456, and ξ = 0), through which the recurrence intervals (RI) for the discharge data can be calculated. A changepoint in the annual standard deviation of the discharge data is detected in the mid-1990s using the pruned exact linear time (PELT) algorithm. Despite some shortcomings, this study paves the way for using data-driven algorithms, along with more traditional numerical and descriptive approaches, to analyze hydrological time series in Indonesia. This is crucial considering the increasing number of hydroclimatological disasters expected in the future as a consequence of global climate change.
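The two building blocks (Gumbel return levels and PELT changepoints) are easy to prototype, as in the sketch below. Note the simplifications: scipy's maximum-likelihood fit replaces the paper's MCMC fitting, the data files are hypothetical, and the PELT penalty value is arbitrary.

```python
# Sketch: fit a Gumbel distribution to annual block maxima and detect a changepoint.
import numpy as np
from scipy import stats
import ruptures as rpt

maxima = np.loadtxt("annual_max_discharge.txt")   # hypothetical annual block maxima

# Gumbel = GEV with shape xi = 0; scipy fits location (mu) and scale (sigma) by MLE.
mu, sigma = stats.gumbel_r.fit(maxima)

# Recurrence interval: the discharge level expected once every T years on average.
for T in (10, 50, 100):
    level = stats.gumbel_r.ppf(1 - 1 / T, loc=mu, scale=sigma)
    print("RI %4d yr -> discharge %.1f" % (T, level))

# PELT changepoint detection on the annual standard deviation series.
annual_sd = np.loadtxt("annual_sd.txt")           # hypothetical per-year std. deviations
breaks = rpt.Pelt(model="rbf").fit(annual_sd).predict(pen=5)
print("changepoints at indices:", breaks[:-1])    # last index is just the series end
```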
Comparison of Feature Selection Methods for DDoS Attacks on Software Defined Networks using Filter-Based, Wrapper-Based and Embedded-Based Kurniawan, M.T.; Yazid, Setiadi; Sucahyo, Yudho Giri
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.1476

Abstract

Internet technology is developing very rapidly, and keeping internet users protected from cyberattacks is part of the security challenge. Distributed Denial of Service (DDoS) is a real attack that continues to grow, and DDoS attacks have become among the most difficult to detect and mitigate appropriately. The Software Defined Network (SDN) architecture is a novel approach to network management and a new concept for network infrastructure. The controller is a single point of failure in SDN and the most dangerous target of attack, because an attacker who takes control of the controller can control all network traffic. Various detection and mitigation methods have been offered, but few consider the capacity of the SDN controller. In this research, we propose a feature selection method for DDoS attacks. The aim is to select the most important features of DDoS attacks on SDN so that DDoS detection on SDN can be lightweight and early. This research uses a dataset [1] generated by a Mininet emulator. The simulation runs benign TCP, UDP, and ICMP traffic as well as malicious traffic consisting of TCP SYN, UDP Flood, and ICMP attacks. A total of 23 features are available in the dataset; some are extracted from the switches, and others are calculated. Using three methods (filter-based, wrapper-based, and embedded-based), we obtain consistent results in which the pktcount feature has the highest feature importance for DDoS attacks on SDN.
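The three feature-selection families can be compared in a few lines, as sketched below. The file sdn_flows.csv, its "label" column, and the specific estimators (logistic regression for the wrapper, random forest for the embedded importances) are illustrative assumptions; the abstract does not say which estimators the authors used.

```python
# Sketch: filter-, wrapper-, and embedded-based feature ranking on a flow dataset.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("sdn_flows.csv")            # hypothetical: 23 flow features + "label"
X, y = df.drop(columns=["label"]), df["label"]

# Filter: score each feature independently of any model.
filter_rank = pd.Series(mutual_info_classif(X, y), index=X.columns)

# Wrapper: recursively eliminate features based on a model's performance.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

# Embedded: importances learned as a side effect of training.
embedded_rank = pd.Series(
    RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
    index=X.columns)

print("filter top:  ", filter_rank.nlargest(5).index.tolist())
print("wrapper top: ", list(X.columns[rfe.support_]))
print("embedded top:", embedded_rank.nlargest(5).index.tolist())
```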
Classification of Tempeh Maturity Using Decision Tree and Three Texture Features Istiadi; Faqih; Aviv Yuniar Rahman; Dean Ariesta Aziz; April Lia Hananto; Sarina Sulaiman; Candra Zonyfar
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.983

Abstract

Tempeh is a typical Indonesian food, widely eaten in Indonesia; today it is also known around the world, and vegans use it as a meat substitute. This study aims to improve the accuracy of tempeh maturity classification by combining three texture feature extraction techniques with the decision tree classification method. The results show that Gabor filter feature extraction achieved its highest accuracy of 71% at a split ratio of 10:90 and its lowest of 60% at a split ratio of 90:10. GLCM feature extraction achieved its highest accuracy of 86% at a split ratio of 90:10 and its lowest of 60% at 10:90, while the highest accuracy of Wavelet feature extraction was 77%. Of the three feature extraction techniques, GLCM therefore yielded the highest accuracy (86%), outperforming Gabor and Wavelet. The tests show that the decision tree with GLCM feature extraction was superior to the other strategies for classifying tempeh maturity. Future research could aim to raise accuracy toward 100% using the CNN deep learning method, and could also evaluate Support Vector Machine (SVM) and Naïve Bayes classifiers on the GLCM features.
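For the best-performing combination, a minimal sketch of GLCM texture features feeding a decision tree is shown below. The image paths, labels, and the particular GLCM properties are placeholders; Gabor and Wavelet features are omitted for brevity.

```python
# Sketch: GLCM texture features from tempeh images feeding a decision tree.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def glcm_features(path):
    gray = (rgb2gray(imread(path)) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

paths = ["tempeh_001.jpg", "tempeh_002.jpg"]   # placeholder image paths
labels = ["raw", "mature"]                     # placeholder maturity labels

X = np.array([glcm_features(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, train_size=0.9,
                                          random_state=0)   # 90:10 split
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", tree.score(X_te, y_te))
```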
Image Prediction of Exact Science and Social Science Learning Content with Convolutional Neural Network Mambang; Finki Dona Marleny
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.923

Abstract

Learning content can be identified through text, images, and videos. This study aims to predict the learning content found on YouTube. The images used come from learning content in the exact sciences, such as mathematics, and in social science fields, such as culture. Images in learning content are predicted by building a CNN model, and the dataset was collected from learning content on YouTube. The first assessment was performed with the RMSprop optimizer; a learning rate of 0.001 was used for all optimizers. Several other optimizers were also used in this experiment: Adam, Nadam, SGD, Adamax, Adadelta, Adagrad, and Ftrl. The CNN model used in the dataset training process was tested with these optimizers and obtained high accuracy with RMSprop, Adam, and Adamax. The experiments still have shortcomings, such as not using a momentum component, which improves the speed and quality of neural network training; in later studies, a CNN model with a momentum component could be developed to obtain better training results and accuracy, and all optimizers included in Keras and TensorFlow could be used for comparison. This study concluded that images of learning content on YouTube can be modeled and classified. Further research can add image variables and a momentum component when testing CNN models.
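The optimizer comparison is straightforward to reproduce in Keras, as sketched below. The network layout, image size, "frames/" folder-per-class dataset, and the subset of optimizers looped over are assumptions; only the 0.001 learning rate and the optimizer names come from the abstract.

```python
# Sketch: comparing Keras optimizers on a small image-classification CNN.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_model(n_classes=2):
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Hypothetical dataset of video-frame images sorted into one folder per class.
train = tf.keras.utils.image_dataset_from_directory(
    "frames/", image_size=(128, 128), batch_size=32)

for name in ("RMSprop", "Adam", "Adamax", "SGD"):
    model = build_model()
    model.compile(optimizer=getattr(optimizers, name)(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(train, epochs=3, verbose=0)
    print(name, "train accuracy %.3f" % hist.history["accuracy"][-1])
```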
