Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN: 2549-9610     EISSN: 2549-9904     DOI: -
Core Subject: Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, and applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Development of Programming Log Collection System Requirements Using Interface Requirement Analysis Techniques Park, Huijae; Lee, Wongyu; Kim, Jamee
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.938

Abstract

As software permeates every industry, companies are increasingly competing to recruit software talent. Despite this interest and investment, it is difficult to find candidates who combine technical expertise with specialization in a particular field, so companies have begun to look instead for people with strong problem-solving skills who can compensate for gaps in domain expertise. Countries that recognized the need to develop such competitive talent have reshaped their education systems accordingly; in particular, programming education, which cultivates problem-solving ability, has expanded markedly. However, programming differs from conventional subjects, and many learners struggle in the introductory stage, largely because of the difficulty of debugging. Analyzing these learners' difficulties and supporting their learning requires a system that can collect data from the programming process and classify behavior types. Among the several methods available for deriving system requirements, this study selected the interface requirements analysis method. By deriving the types of data a system administrator wants to collect, we determined how the system should process that data. The study lays the foundation for a system that can analyze introductory learners' programming processes by deriving the functional and non-functional requirements of the data collection system through interface requirements analysis.
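The kind of data such a collection system gathers can be sketched as a structured event record. The field names, event types, and collector API below are illustrative assumptions, not the schema the paper derives.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical event record for a programming-process log; the field
# names and event types are illustrative, not the paper's schema.
@dataclass
class ProgrammingLogEvent:
    learner_id: str
    event_type: str            # e.g. "edit", "compile", "run", "debug"
    source_snapshot: str       # code state at the time of the event
    error_message: str = ""    # compiler/runtime output, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class LogCollector:
    """Collects events so behavior types can be analyzed later."""
    def __init__(self):
        self.events = []

    def record(self, event: ProgrammingLogEvent):
        self.events.append(asdict(event))

    def events_for(self, learner_id: str):
        return [e for e in self.events if e["learner_id"] == learner_id]

collector = LogCollector()
collector.record(ProgrammingLogEvent("s01", "compile", "print(x)", "NameError"))
collector.record(ProgrammingLogEvent("s01", "edit", "x = 1\nprint(x)"))
print(len(collector.events_for("s01")))  # 2
```

A real system would stream such records to a server; the in-memory list here just stands in for that storage layer.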
Comparison of Apache SparkSQL and Oracle Performance: Case Study of Data Cleansing Process Hidayati, Ilma Nur; Kusumasari, Tien Fabrianti; Hamami, Faqih
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.928

Abstract

A high-quality dataset is a valuable asset for a company: the data can be processed into information that improves decision-making. However, data volumes grow over time, and that growth tends to degrade data quality, so good data management is needed to keep quality at the company's standard. One such effort is data cleansing, which cleans data of errors, inaccuracies, duplicates, format discrepancies, and the like. Apache Spark is an engine for analyzing large amounts of data, while Oracle Database is a database management system; both are reliable tools that can analyze data through SQL queries. This study compared Spark and Oracle performance based on query processing time, testing both on the queries used to cleanse a dataset of millions of rows, with a focus on quantitative analysis. The results showed clear differences in query processing time between the two tools: Apache Spark rated better, with consistently faster query processing than Oracle Database. This suggests that Oracle is better suited to storing complex data models than to analyzing large data volumes. Future research could add other comparison aspects, such as memory and CPU usage, and could apply query optimization techniques to enrich the query experiments.
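The cleansing queries being benchmarked are of the kind sketched below. The paper runs them on Apache SparkSQL and Oracle; this sketch uses Python's built-in SQLite engine only to keep the example self-contained and runnable, and the table and column names are hypothetical.

```python
import sqlite3

# Illustrative data-cleansing query (trim whitespace, drop missing values,
# de-duplicate). The paper benchmarks such queries on SparkSQL and Oracle;
# SQLite is used here only so the sketch is self-contained.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, "  Alice ", "a@x.com"),
    (1, "  Alice ", "a@x.com"),   # exact duplicate row
    (2, "Bob", None),             # missing email
    (3, "Carol", "c@x.com"),
])

rows = con.execute("""
    SELECT DISTINCT id, TRIM(name) AS name, email
    FROM customers
    WHERE email IS NOT NULL
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Alice', 'a@x.com'), (3, 'Carol', 'c@x.com')]
```

On Spark the same statement would run through `spark.sql(...)` against a registered table; the comparison in the paper is about how long such statements take at millions-of-rows scale.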
Implementation of Support Vector Regression for Polkadot Cryptocurrency Price Prediction Haryadi, Deny; Hakim, Arif Rahman; Atmaja, Dewi Marini Umi; Yutia, Syifa Nurgaida
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.945

Abstract

Cryptocurrency is a high-risk investment instrument, but one that can also yield greater returns than other instruments. To profit, investors need to predict the price of the cryptocurrency they intend to buy, yet the highly volatile movement of cryptocurrency prices makes such prediction difficult. Data mining extracts information from large datasets by collecting data and mining the historical patterns and relationships within them. Support Vector Regression (SVR) can produce accurate cryptocurrency price predictions and is relatively robust against overfitting. Polkadot is among the cryptocurrencies frequently used as investment instruments. Predicting the Polkadot daily closing price with SVR yielded good accuracy: a radial basis function (RBF) kernel with cost parameter C = 1000 and gamma = 0.001 achieved a model accuracy of 90.00% with a MAPE of 5.28, while a linear kernel with C = 10 achieved 87.68% accuracy with a MAPE of 6.10. Parameter tuning thus shows that the best accuracy and MAPE are obtained with the RBF kernel at C = 1000 and gamma = 0.001. These results indicate that Support Vector Regression is well suited to predicting Polkadot prices.
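With scikit-learn, the reported best configuration can be sketched as follows. The price series below is synthetic and only illustrates the workflow; it is not real Polkadot data, and the accuracy on it says nothing about the paper's figures.

```python
import numpy as np
from sklearn.svm import SVR

# Sketch of the paper's best configuration (RBF kernel, C=1000,
# gamma=0.001) on a synthetic daily-closing-price series.
rng = np.random.default_rng(0)
days = np.arange(60, dtype=float).reshape(-1, 1)
prices = 30 + 0.2 * days.ravel() + rng.normal(0, 0.5, 60)  # fake prices

model = SVR(kernel="rbf", C=1000, gamma=0.001)
model.fit(days[:50], prices[:50])       # train on the first 50 days
pred = model.predict(days[50:])         # forecast the remaining 10 days

# Mean Absolute Percentage Error, the accuracy metric the paper reports
mape = np.mean(np.abs((prices[50:] - pred) / prices[50:])) * 100
print(pred.shape, round(mape, 2))
```

Swapping in `SVR(kernel="linear", C=10)` reproduces the paper's second configuration for comparison.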
A Multi-Agent K-Means Algorithm for Improved Parallel Data Clustering Mohammed Ahmed Jubair; Salama A. Mostafa; Aida Mustapha; Zirawani Baharum; Mohamad Aizi Salamat; Aldo Erianda
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.934

Abstract

Due to the rapid increase in data volumes, clustering algorithms now find applications in a variety of fields. However, existing clustering techniques struggle with large data volumes because of accuracy issues and high computational cost. This work therefore offers a parallel clustering technique that combines K-means with a Multi-Agent System (MAS), called Multi-K-means (MK-means). The main goal is to keep the dataset intact while boosting the accuracy of the clustering procedure: the cluster centers of each partition are calculated, combined, and then clustered again. The statistical significance of the proposed method's performance was confirmed on five datasets that served to test and assess the algorithm's efficacy. In terms of performance, MK-means is compared with the Clustering-based Genetic Algorithm (CGA), the Adaptive Biogeography Clustering-based Genetic Algorithm (ABCGA), and standard K-means. The results show that MK-means outperforms the other algorithms because it activates agents separately for the clustering processes, with each agent handling a separate group of features.
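The partition-cluster-combine-recluster idea can be sketched as follows, with plain k-means standing in for each agent's local clustering and synthetic two-cluster data. This is an illustration of the scheme, not the paper's implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns the final cluster centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

# Synthetic data: two well-separated Gaussian clusters near 0 and 10.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(10.0, 0.3, (100, 2))])
rng.shuffle(X)

k, n_agents = 2, 4
partitions = np.array_split(X, n_agents)          # one slice per "agent"
local_centers = np.vstack([kmeans(p, k, seed=i)   # each agent clusters alone
                           for i, p in enumerate(partitions)])
global_centers = kmeans(local_centers, k)         # then cluster the centers
print(np.sort(global_centers[:, 0]))
```

Each agent only ever touches its own partition, which is what makes the scheme parallelizable; the final pass over the small set of local centers is cheap.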
Lightweight Generative Adversarial Network Fundus Image Synthesis Nurhakimah Abd Aziz; Mohd Azman Hanif Sulaiman; Azlee Zabidi; Ihsan Mohd Yassin; Megat Syahirul Amin Megat Ali; Zairi Ismael Rizman
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.1-2.924

Abstract

Blindness is a global health problem that affects billions of lives. Recent advances in Artificial Intelligence (AI), particularly Deep Learning (DL), have the potential to address blindness through accurate, non-invasive early detection and treatment of Diabetic Retinopathy (DR). DL-based techniques rely on extensive examples to robustly and accurately capture the features that represent the data, but the number of samples a DL classifier needs to learn properly is enormous, which makes collecting and labeling them difficult. This paper therefore presents a lightweight Generative Adversarial Network (GAN) for synthesizing fundus samples with which to train AI-based systems. The GAN follows the structure of the recent Lightweight GAN (LGAN) architecture and was trained on samples collected from publicly available datasets. The implementation and results of the LGAN training and image generation are described. The trained network generated realistic high-resolution samples of normal and diseased fundus images: the generated images realistically reproduced key structures and their placement, such as the optic disc, blood vessels, and exudates. Successful and unsuccessful generations were sorted manually, yielding 56.66% realistic results relative to the total generated samples. The rejected samples appear to suffer from inconsistencies in shape, key structures, placement, and color.
Data-Centric Learning Method for Synthetic Data Augmentation and Object Detection Yeseong Park; Hyeongbok Kim; Yoon Jung Park; Changsin Lee; Jinsuk Lee
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.939

Abstract

This paper proposes a deep learning framework that reduces large-scale domain shift in object detection using domain adaptation techniques. We take a data-centric approach to domain adaptation with Image-to-Image translation models, a methodology that reduces domain shift by restyling source data in the target domain's style. However, existing Image-to-Image models focus on style translation and cannot be applied directly to the domain adaptation task. We solve this with a data-centric approach that simply reorders the training sequence of the domain adaptation model. Defining image features as content and style, we hypothesize that object-specific information is tied more closely to content than to style, and we therefore experiment with methods that preserve content information before style is learned. The model is trained separately, altering only the training data. Our experiments confirm that the proposed method improves the domain adaptation model's performance and makes the generated synthetic data more effective for training object detection models. We compare our approach with the existing single-stage method in which content and style are trained simultaneously and argue that the proposed method is the more practical way to train object detection models. The emphasis of this study is preserving image content while changing image style. In future work, we plan additional experiments applying synthetic data generation to other application areas, such as indoor scenes and bin picking.
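The reordered training sequence can be illustrated with a deliberately tiny stand-in model: a linear map y = w·x + b, where w is fit first to reconstruct content and b is fit afterwards to shift toward the target "style" (here, a brightness offset of +1). This is a didactic sketch of the two-stage ordering, not the paper's Image-to-Image model.

```python
# Toy illustration of the two-stage training order. Stage 1 learns to
# preserve content (reconstruct x); stage 2 then learns the style shift
# toward the target domain. Both the model and the "style" are stand-ins.
def grad_step(param, grad, lr=0.1):
    return param - lr * grad

content = [0.0, 1.0, 2.0, 3.0]            # source-domain pixel values
target = [x + 1.0 for x in content]       # target domain = source + style

w, b = 0.0, 0.0
# Stage 1: content preservation (fit w to reconstruct x; style frozen).
for _ in range(200):
    g_w = sum(2 * (w * x - x) * x for x in content) / len(content)
    w = grad_step(w, g_w)
# Stage 2: style translation (fit b toward the target; content frozen).
for _ in range(200):
    g_b = sum(2 * (w * x + b - y) for x, y in zip(content, target)) / len(content)
    b = grad_step(b, g_b)

print(round(w, 2), round(b, 2))  # 1.0 1.0
```

The single-stage baseline would update w and b simultaneously against the target; the point of the reordering is that content reconstruction is already in place before style is learned.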
A Multi-Agent Simulation Evacuation Model Using The Social Force Model: A Large Room Simulation Study Hussain, Norhaida; Shiang, Cheah Wai; Loke, Seng; Khairuddin, Muhammad Asyraf bin
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.929

Abstract

Research on evacuation simulation has received significant attention over the past few decades; disasters, whether natural or man-made, that claimed lives have been the impetus for many evacuation studies. Numerous works have shown that an evacuation can be simulated using the Social Force Model (SFM) together with a leading person, or leader, but without a multi-agent architecture. This article investigates a multi-agent architecture for crowd steering in which the Social Force Model determines how evacuees move around the area. The model is then simulated in NetLogo to determine whether the architecture can represent the evacuation scenario. A simulation test investigates how closely the behavior of the original SFM and the message-passing model match. The results demonstrate that the proposed architecture can simulate pedestrian evacuation, with both a grouping strategy and a no-grouping technique, and that the model captures common evacuation patterns, such as the arch-shaped pattern that forms at the exit opening.
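The forces the SFM combines for each evacuee can be sketched as follows: a driving term toward the exit plus Helbing-style exponential repulsion from nearby pedestrians. The parameter values are illustrative assumptions, not those used in the article's NetLogo model.

```python
import math

# Minimal sketch of the Social Force Model terms: a driving force toward
# the exit plus exponential repulsion from other pedestrians.
# Parameter values are illustrative.
V0, TAU = 1.34, 0.5          # desired speed (m/s), relaxation time (s)
A, B, R = 2.0, 0.3, 0.6      # repulsion strength, range, combined radii

def social_force(pos, vel, exit_pos, others):
    ex = (exit_pos[0] - pos[0], exit_pos[1] - pos[1])
    dist = math.hypot(*ex) or 1e-9
    e = (ex[0] / dist, ex[1] / dist)                  # unit vector to exit
    f = [(V0 * e[0] - vel[0]) / TAU,                  # driving term
         (V0 * e[1] - vel[1]) / TAU]
    for o in others:                                  # pairwise repulsion
        dx, dy = pos[0] - o[0], pos[1] - o[1]
        d = math.hypot(dx, dy) or 1e-9
        mag = A * math.exp((R - d) / B)
        f[0] += mag * dx / d
        f[1] += mag * dy / d
    return f

# One pedestrian at rest, exit to the right, a neighbour close behind.
f = social_force(pos=(0.0, 0.0), vel=(0.0, 0.0),
                 exit_pos=(10.0, 0.0), others=[(-0.5, 0.0)])
print(f[0] > 0)  # True: net push toward the exit
```

Integrating this force per tick per agent is exactly what the agent-based simulation does; the multi-agent architecture adds message passing on top of these local force computations.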
How to Deeply Analyze the Content of Online Newspapers Using Clustering and Correlation Rokhayati, Yeni; Sartikha, -; Janah, Nur Zahrati
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.942

Abstract

Increasing the number of visitors is one of the keys to increasing an online newspaper's income, whether through more advertisements, Google AdSense, or customer trust. It is therefore important to identify and analyze in depth which news categories attract visitors. Because online newspapers add content every day, even every few hours, this pattern analysis differs from analyzing the content patterns of ordinary websites. This study contributes a method for analyzing website content, particularly online news: clustering is used to classify which news categories bring a high, medium, or low number of visitors, and correlation analysis then explores the relationships between the variables, that is, which parameters have a large or small effect on visitor growth. A local Batam-based online newspaper serves as the case study. The data are collected, preprocessed, and then analyzed using the clustering and correlation methods. The resulting analysis of news readership suggests which news categories should be optimized because they drive visitor growth. A summary of the analysis steps is presented, along with suggestions for other online newspaper owners or researchers interested in a similar analysis of online news content.
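The two analysis steps can be sketched on made-up data: bucket categories by visitor volume (a simple threshold stand-in for the clustering step), then compute a Pearson correlation between a content variable and visitors. None of the numbers below come from the Batam newspaper's dataset.

```python
# Illustrative sketch of the study's two steps on made-up numbers:
# (1) group news categories into high/medium/low visitor clusters,
# (2) correlate a content variable (articles published) with visitors.
visitors = {"crime": 9200, "politics": 8700, "sports": 4100,
            "lifestyle": 3900, "culture": 1200}
articles = {"crime": 60, "politics": 55, "sports": 30,
            "lifestyle": 35, "culture": 10}

# Simple 3-level bucketing standing in for k-means with k=3.
lo, hi = min(visitors.values()), max(visitors.values())
step = (hi - lo) / 3
cluster = {c: ("low" if v < lo + step else
               "medium" if v < lo + 2 * step else "high")
           for c, v in visitors.items()}

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([articles[c] for c in visitors],
            [visitors[c] for c in visitors])
print(cluster["crime"], round(r, 2))
```

A strong positive r here would suggest that publishing more in a category is associated with more visitors, which is the kind of relationship the correlation step of the study probes.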
An Intelligent Missing Data Imputation Techniques: A Review Seu, Kimseth; Kang, Mi-Sun; Lee, HwaMin
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.935

Abstract

Incomplete datasets are an unavoidable problem in data preprocessing, since most machine learning algorithms cannot train a model on data with missing values. Various data imputation approaches have been proposed to resolve this problem, each built to predict the most appropriate replacement value using a different machine learning algorithm and concept. Accurately estimating missing values is especially critical for some datasets, notably medical data. This paper examines the strengths of several distinguished state-of-the-art benchmarks: the K-Nearest Neighbors imputation method (KNNImputer), Bayesian Principal Component Analysis (BPCA) imputation, Multiple Imputation by Chained Equations (MICE), and Multiple Imputation with Denoising Autoencoders (MIDAS). These methods offer workable approaches for estimating and evaluating appropriate data points with which to impute missing values. We evaluate all of these imputation techniques on the same four datasets, collected from a hospital. Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) measure the outcomes and allow the methods to be compared, identifying the most robust and appropriate method for overcoming missing data problems. In our experiments, KNNImputer and MICE performed better than BPCA and MIDAS, and BPCA performed better than MIDAS.
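The nearest-neighbour idea behind KNNImputer, together with the MAE and RMSE metrics, can be sketched with a minimal 1-NN imputer on a toy matrix. This illustrates the principle only; the paper benchmarks the full KNNImputer, BPCA, MICE, and MIDAS implementations.

```python
import math

# Minimal k-nearest-neighbour imputation sketch plus the two error
# metrics used in the paper (MAE, RMSE). The toy matrix is made up.
def knn_impute(rows, k=1):
    """Fill None entries using the mean of the k nearest complete rows."""
    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        obs = [i for i, v in enumerate(r) if v is not None]
        # distance measured on observed features only
        nbrs = sorted(complete,
                      key=lambda c: sum((r[i] - c[i]) ** 2 for i in obs))[:k]
        filled.append([v if v is not None else sum(n[i] for n in nbrs) / k
                       for i, v in enumerate(r)])
    return filled

data = [[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [1.08, None]]
imputed = knn_impute(data, k=1)

truth = 2.05                     # pretend this value was masked out
err = imputed[3][1] - truth
mae = abs(err)                   # Mean Absolute Error (one value here)
rmse = math.sqrt(err ** 2)       # Root Mean Square Error
print(round(imputed[3][1], 2))   # 2.1 (nearest complete row is [1.1, 2.1])
```

The benchmarked methods differ in how they model that replacement value (neighbours, latent components, chained regressions, autoencoders), but all are scored the same way: mask known values, impute them, and compare against the truth with MAE and RMSE.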
Smart Automation Aquaponics Monitoring System Muhammad Saef Tarqani Abdullah; Lucyantie Mazalan
JOIV : International Journal on Informatics Visualization Vol 6, No 1-2 (2022): Data Visualization, Modeling, and Representation
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1-2.925

Abstract

Modern agriculture such as aquaponics has become a well-known farming solution, especially in Asian countries: it offers an alternative for meeting food demand while maintaining environmental sustainability. However, maintaining and monitoring such a system takes manpower and time. This research proposes a smart automated aquaponics monitoring system that lets users maintain and monitor the installation through a smartphone application. The system uses a DHT11 sensor to record temperature and humidity, an HC-SR04 for water level, and an FC-28 for soil moisture. The sensors are integrated with a WeMos D1 Wi-Fi board based on the ESP8266 microcontroller, which processes the data. The collected data are stored in the cloud and retrieved via the Blynk application, which also acts as an actuator interface and lets users control the parameters involved. The application monitors the humidity, temperature, and water level in the fish tank and controls the fish-feeding actuator. The system also notifies the user of any activity performed, such as watering plants, feeding fish, or abnormal ambient temperature. System performance was evaluated using regression modeling; the results indicate positive growth for both plants and fish during the monitoring period, suggesting the proposed system's effectiveness. Overall, the solution reduces manpower and operating costs while supporting food demand and environmental sustainability, especially in urban residences.
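The per-cycle monitoring logic such a system runs over its sensor readings can be sketched as simple threshold rules. The threshold values and field names below are illustrative assumptions, not the paper's configuration; on the actual device this logic would run on the ESP8266 and push alerts through Blynk.

```python
# Rule-based monitoring sketch over one cycle of sensor readings
# (DHT11 temperature/humidity, HC-SR04 water level, FC-28 soil moisture).
# Thresholds are illustrative assumptions.
TEMP_RANGE = (24.0, 32.0)     # deg C considered normal around the tank
MIN_WATER_CM = 20.0           # minimum acceptable water level
MIN_MOISTURE = 30.0           # % soil moisture before watering triggers

def check(readings):
    """Return (actuator commands, user notifications) for one cycle."""
    actions, alerts = [], []
    t = readings["temperature_c"]
    if not TEMP_RANGE[0] <= t <= TEMP_RANGE[1]:
        alerts.append(f"abnormal temperature: {t} C")
    if readings["water_level_cm"] < MIN_WATER_CM:
        actions.append("refill tank")
        alerts.append("low water level")
    if readings["soil_moisture_pct"] < MIN_MOISTURE:
        actions.append("water plants")
        alerts.append("watering plants")
    return actions, alerts

actions, alerts = check({"temperature_c": 35.2,
                         "water_level_cm": 25.0,
                         "soil_moisture_pct": 22.0})
print(actions)  # ['water plants']
```

Here the hot reading produces only a notification (nothing for an actuator to do), while the dry soil both triggers the watering actuator and notifies the user, mirroring the notify-on-activity behavior the abstract describes.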

Page 32 of 118 | Total Records: 1,172