Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat,
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Deep Learning Approach for Prediction of Brain Tumor from Small Number of MRI Images Zailan, Zulaikha N.I.; Mostafa, Salama A.; Abdulmaged, Alyaa Idrees; Baharum, Zirawani; Jaber, Mustafa Musa; Hidayat, Rahmat
JOIV : International Journal on Informatics Visualization Vol 6, No 2-2 (2022): A New Frontier in Informatics
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.6.2-2.987

Abstract

Daily, the computer industry has been moving toward machine intelligence. Deep learning is a subfield of machine learning (ML) within artificial intelligence (AI). It mimics the functioning of the human brain in analyzing data and generating patterns for decision-making, and is gaining much attention nowadays because of its superior precision when trained with large data. This study uses a deep learning approach to predict brain tumors from magnetic resonance imaging (MRI) medical images. The study follows the CRISP-DM methodology using three deep learning algorithms, VGG-16, Inception-V3, and MobileNet-V2, implemented on the Python platform. The algorithms are evaluated on a small number of MRI images, as the dataset contains only 98 samples of benign and 155 samples of malignant brain tumors. The main objective of this work is therefore to identify the deep learning algorithm that performs best on small datasets. The performance evaluation is based on confusion-matrix criteria, including accuracy, precision, and recall. Overall, the classification results of MobileNet-V2 tend to be higher than those of the other models, with a recall of 86.00%. Inception-V3 achieved the second-highest accuracy at 84.00%, while VGG-16 achieved the lowest at 79.00%. Thus, this work shows that deep learning technology in the medical field can predict brain tumors even with a small dataset.
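The confusion-matrix criteria named in the abstract can be computed directly from the four cell counts. A minimal sketch in plain Python, using hypothetical counts rather than the paper's data:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # of predicted-malignant, how many truly are
    recall = tp / (tp + fn)     # of truly malignant, how many were caught
    return accuracy, precision, recall

# Hypothetical counts for a binary benign/malignant split (illustrative only).
acc, prec, rec = confusion_metrics(tp=43, fp=7, fn=7, tn=43)
print(f"accuracy={acc:.2%} precision={prec:.2%} recall={rec:.2%}")
```

On a dataset this small (253 images total), these metrics should be read together rather than individually, which is why the abstract reports accuracy, precision, and recall side by side.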
Optimal Data Transmission and Improve Efficiency through Machine Learning in Wireless Sensor Networks Park, Hyunjoo; Jeon, Junheon
JOIV : International Journal on Informatics Visualization Vol 6, No 2-2 (2022): A New Frontier in Informatics
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.2-2.1125

Abstract

Each sensor node in a WSN is typically equipped with a small battery of limited capacity. Energy-efficient communication is therefore considered a key component of extending network lifetime. In addition, as the utilization of the sensor network increases, duplicate and abnormal data are also collected, reducing the accuracy of the data in various environments. AI is used to recognize data anomalies and increase packet accuracy by removing out-of-range data. This can improve performance through optimal data transmission, resulting in increased network lifetime, energy efficiency, and reliability. This paper proposes a protocol called MLQ-MAC that reflects the above. MLQ-MAC uses AI techniques to consider different types of data packets. From the data collected by the sensor, measurement anomalies and duplicates are removed, and the remainder is stored in different transmission queues by priority. Efficient data transfer is possible by using an AI discriminator for accurate classification before data is stored in a transmission queue. The AI discriminator classifies according to a variety of factors, including the collection environment and the characteristics of network applications. It also uses two new techniques, self-adaptation and scheduling, for efficient transmission. In the protocol, the receiver adjusts the duty cycle according to transmission urgency to improve network QoS. Finally, the simulation results show that the MLQ-MAC protocol reduces energy consumption at the receiver by up to 3.4% and per bit by up to 2.3%, and improves packet delivery accuracy by up to 3%.
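The filter-then-queue idea behind MLQ-MAC can be sketched in a few lines. This is an illustrative toy (the names, valid range, and duplicate rule are assumptions, not from the paper): out-of-range or duplicate readings are dropped, and the rest are held in a queue ordered by priority.

```python
import heapq

def enqueue_readings(readings, valid_range=(0.0, 100.0)):
    """Filter anomalies/duplicates, then drain a priority queue (lower = more urgent)."""
    lo, hi = valid_range
    seen, queue = set(), []
    seq = 0  # tie-breaker so equal-priority items keep arrival order
    for priority, value in readings:
        if not (lo <= value <= hi) or value in seen:  # out-of-range or duplicate
            continue
        seen.add(value)
        heapq.heappush(queue, (priority, seq, value))
        seq += 1
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

# The urgent reading 42.0 is transmitted first; 150.0 (out of range) and the
# duplicate 17.5 are filtered out before queueing.
print(enqueue_readings([(2, 17.5), (1, 42.0), (2, 17.5), (0, 150.0)]))
```

The actual protocol additionally uses an AI discriminator for the classification step and adapts the receiver's duty cycle; neither is modeled here.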
Intra-frame Based Video Compression Using Deep Convolutional Neural Network (DCNN) Arief Bramanto Wicaksono Putra; Achmad Fanany Onnilita Gaffar; Muhammad Taufiq Sumadi; Lisa Setiawati
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.1012

Abstract

In principle, a video codec is built by implementing various algorithms and their developments. The next generation of codecs involves more applications of artificial intelligence. DCNN (Deep Convolutional Neural Network) is a multi-layer neural network concept with a deep learning approach in the field of artificial intelligence. This study proposes a DCNN with three hidden layers for intra-frame-based video compression. The DCT and fractal methods were used to compare the performance of the proposed method. The training image (obtained from the average of all down-sampled frames) is divided into several square blocks using a square-block shift operation until all parts of the image are covered. All pixels in each block act as input data patterns. After the training process, the trained DCNN is used to construct the feature and sub-feature images obtained through the max-function operation on the feature bank and sub-feature bank. These feature and sub-feature images then serve as a spatial redundancy minimizer, with specific manipulation techniques, and simultaneously as a quantizer, without converting the frame's pixels to a bit-stream. The result of this process is a compressed image. Experiments on the entire dataset yielded an AAPR (Average Approximate Performance Ratio) of 147.71%, an average of 1.5 times better than the other methods. In further studies, the performance of the proposed DCNN could be improved by modifying its structure so that it directly outputs the feature and sub-feature images, or by combining it with the DCT or fractal method.
An Investigation into Indonesian Students' Opinions on Educational Reforms through the Use of Machine Learning and Sentiment Analysis - Sarmini; Abdullah Alhabeeb; Majed Mohammed Abusharhah; Taqwa Hariguna; Andhika Rafi Hananto
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.894

Abstract

An anti-Covid-19 plan with social restrictions forced all Indonesian educational institutions to implement online learning in 2020. In early 2022, a new policy changed the learning methods again. Because of the rapid change and short adaptation period, online learning, which had been accepted as a solution for approximately two years, became controversial. There were a variety of reactions in society, particularly on social media, after the rapid shift from face-to-face learning to online learning. This study quantifies text sentiment expressed on social media through machine learning. It used SVM, RF, DT, LR, and k-nearest neighbors (KNN) to develop a sentiment analysis model. The SVM- and RF-based sentiment analysis models outperform the others in cross-validation tests using data from the same Twitter social media site. Furthermore, RF can classify public opinion into three groups, positive, negative, and neutral, with a low error rate. The f1 values of our KNN-based model were measured at 75%, 65%, and 87% for negative, neutral, and positive tweets, respectively, which is slightly more accurate than previous studies with the same method and purpose.
Exploring Extended Configuration of Digital Eco-Dynamic Influence on Small E-Business' Product Innovation Yuniarty, -; Gautama So, Idris; Bramantoro Abdinagoro, Sri; Hamsal, Mohammad
JOIV : International Journal on Informatics Visualization Vol 6, No 2-2 (2022): A New Frontier in Informatics
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.2-2.1145

Abstract

This study comprehensively explores the factors affecting product innovation performance in small e-businesses. The effects of the broader composition of digital eco-dynamics on the product innovation performance of small businesses are little understood. This study tries to fill this gap and investigate the interdependencies above. It offers the novelty of using resource-induced coping heuristics (RICH) as a construct that can improve innovation performance, expanding on a digital eco-dynamics concept that has not been developed for ten years. Confirmatory factor analysis, descriptive statistics, construct reliability, average variance extracted, and the RMSEA model-fit test were used to analyze data from 300 usable responses. The reliability and validity of the empirical model were evaluated through linguist reviews and statistically tested with construct reliability coefficients and confirmatory factor analysis. The findings suggest that IT capability, dynamic capability, environmental uncertainty, and resource-induced coping heuristics positively impact product innovation performance in small e-businesses. This research contributes to the development of innovation theory by offering RICH as a solution. The finding that RICH is positively and significantly related to innovation performance is important for business actors, mainly in the context of developing countries. For entrepreneurs, the findings suggest developing resources in a manner consistent with the RICH strategy so that companies become more entrepreneurially oriented. In this way, the development and actualization of cognitive resources can reduce uncertainty and lead to resource acquisition and resource protection by entrepreneurs.
Enhance Document Contextual Using Attention-LSTM to Eliminate Sparse Data Matrix for E-Commerce Recommender System - Hanafi; Anik Sri Widowati; - Jaeni; Jack Febrian Rusdi
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.1233

Abstract

E-commerce has been among the most important services of the last two decades, influencing economic growth worldwide. A recommender system is an essential mechanism for delivering product information to e-commerce users, and the success of recommender system adoption influences the revenue of an e-commerce company. Collaborative filtering (CF) is the most popular algorithm for creating a recommender system. CF applies a matrix factorization mechanism to calculate the relationship between user and product, using the rating variable as the intersection between them. However, ratings are very sparse: fewer than 4% of user-product pairs are rated. A product document is the product-side information representation, and its aim is to improve the effectiveness of matrix factorization. This research enhances document context using an LSTM with an attention mechanism to capture a contextual understanding of product reviews, and incorporates matrix factorization based on probabilistic matrix factorization (PMF) to produce rating predictions. The study employs the real-world MovieLens ML.1M and Amazon Instant Video (AIV) datasets to observe our ATT-PMF model; the MovieLens ML.1M dataset has a rating density below 4%. Our experiments show that ATT-PMF outperforms previous work by more than 2% on average, and the model is also suitable for huge datasets. For further research, enhancing product document context will be a good way to mitigate the sparse data problem in big data settings.
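The matrix factorization core that PMF builds on can be sketched as plain SGD over the observed (user, item, rating) triples only, which is what makes it workable on a sparse matrix. This toy is illustrative (the hyperparameters are arbitrary, and the paper's ATT-PMF additionally fuses LSTM-attention document features, which are omitted here):

```python
import random

def train_mf(ratings, n_users, n_items, k=4, lr=0.05, reg=0.02, epochs=200, seed=0):
    """Fit user/item factor matrices by SGD on observed ratings only."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(U[u][f] * V[i][f] for f in range(k))
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)  # regularized gradient step
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

# Sparse toy ratings (user, item, rating); most of the 3x3 matrix is unobserved.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 2.0)]
U, V = train_mf(ratings, n_users=3, n_items=3)
pred = sum(U[0][f] * V[0][f] for f in range(4))  # reconstruct an observed cell
print(round(pred, 1))
```

Unobserved cells are then filled in by the same dot product, which is where the recommendation comes from; the sparser the matrix, the more the side-information document features matter.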
Visualization and Analysis of Safe Routes to School based on Risk Index using Student Survey Data for Safe Mobility Jin, Wenquan; Khudoyberdiev, Azimbek; Kim, Dohyeun
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.1163

Abstract

Risk analysis is important in heterogeneous industrial domains to enable sustainable development. Data is the basis for identifying the potential risk elements for improving efficiency, quality, and safety. For supplying safe routes to school based on risk analysis, the risk assessment of routes is one of the most widely used and effective methodologies for filtering the most dangerous roads, intersections, or specific points on roads. This paper presents a visualization and analysis of a risk assessment approach based on a risk index model using geographical information, including routes, danger points, and student survey data. The proposed risk index model derives a risk index from geographical information, including danger points and a route's path. The model includes an equation that calculates the distance of danger points to the path using the coordinates of each location. The survey data mainly comprises route and survey information that is analyzed and preprocessed as input for the risk index model. The survey mainly consists of basic information on the route, survey participants, school route information, and school route coordinates. The data is classified into a school route data set and a school route danger points data set, and these values are applied to the analysis and the risk index model. The risk index model is designed and developed through this analysis of routes.
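The danger-point-to-path distance mentioned above can be illustrated with the standard point-to-segment calculation. A minimal sketch under a planar approximation (the paper works with geographic coordinates, and its actual equation is not reproduced here):

```python
import math

def point_to_segment(px, py, ax, ay, bx, by):
    """Shortest distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    # Projection parameter, clamped so the closest point stays on the segment.
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def route_risk_distance(point, path):
    """Shortest distance from a danger point to any segment of a route's path."""
    return min(point_to_segment(*point, *a, *b) for a, b in zip(path, path[1:]))

path = [(0, 0), (4, 0), (4, 3)]  # a two-segment school route, in arbitrary units
print(route_risk_distance((2, 1), path))  # danger point 1 unit from the first segment
```

A risk index can then weight each danger point inversely by this distance, so hazards lying directly on the walked path dominate the score.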
Predicting Dengue Outbreak based on Meteorological Data Using Artificial Neural Network and Decision Tree Models Muhamad Krishnan, Nor Farisha; Zukarnain, Zuriani Ahmad; Ahmad, Azlin; Jamaludin, Marhainis
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.6.3.788

Abstract

Dengue fever is well known as a potentially fatal disease, and the number of cases in some areas remains uncontrolled. Despite efforts to prevent the dengue outbreak from spreading further, vectors may be to blame. Identifying which weather characteristics contribute to dengue outbreaks is important for predicting them. This study proposes Artificial Neural Network (ANN) and Decision Tree (DT) models based on maximum temperature, minimum temperature, total rainfall, and average humidity to predict dengue outbreaks in Kota Bharu. Different numbers of hidden nodes were used in the ANN to optimize the model. Both models are evaluated on accuracy, sensitivity, and specificity, showing that the ANN (accuracy = 68.85%, sensitivity = 99.71%, specificity = 1.27%) performed better than the DT (accuracy = 67.46%, sensitivity = 98.82%, specificity = 2.53%). This means that the ANN outperforms the DT when predicting a dengue outbreak in Kota Bharu. Based on the ANN model, it can be concluded that the number of hidden nodes affects the model's accuracy, so selecting the ideal number of hidden nodes is important when modeling the ANN. Even though the ANN's accuracy is greater than the DT's, it is still low; it can be inferred that selecting a prediction model appropriate for the dataset type and level of complexity is important. Based on these models, the government may take pre-emptive actions to enhance public awareness about climate change.
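The abstract's combination of very high sensitivity with near-zero specificity is the classic signature of a classifier that predicts the majority class almost always. A small sketch with hypothetical confusion-matrix counts (illustrative only, not the paper's data) shows how that pattern arises:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = true-positive rate; specificity = true-negative rate."""
    sensitivity = tp / (tp + fn)  # fraction of outbreak periods correctly flagged
    specificity = tn / (tn + fp)  # fraction of non-outbreak periods correctly cleared
    return sensitivity, specificity

# Hypothetical imbalanced counts: nearly everything is predicted "outbreak",
# so sensitivity is high while specificity collapses toward zero.
sens, spec = sensitivity_specificity(tp=68, fn=1, tn=1, fp=30)
print(f"sensitivity={sens:.2%} specificity={spec:.2%}")
```

Reading both metrics together, as the study does, exposes this failure mode even when raw accuracy looks moderate.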
Designing an ERP System: A Sustainability Approach Adriansyah, Aveicena Kemal; Ridwan, Ari Yanuar; Septo Hediyanto, Umar Yunan Kurnia
JOIV : International Journal on Informatics Visualization Vol 6, No 2-2 (2022): A New Frontier in Informatics
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.2-2.1117

Abstract

Technology has grown massively in numerous aspects. Any system, whether proprietary or open source, has had to adapt to changing circumstances. For instance, decades ago, people mainly used paper for reporting or taking notes; these days, such work has migrated to digital form. Furthermore, environmental pollution has been triggered by emissions from vehicles, and due to a lack of knowledge and resources to solve this issue, manufacturing processes often do not apply sustainability practices. This has been an intractable environmental issue in the manufacturing area for years. Therefore, developing a sales module dashboard system configured on Sales Order (SO) transactions and customer data could assist the problem-solving process. The purpose is to highlight the sustainability Key Performance Indicators (KPIs) added to the open-source Enterprise Resources Planning (ERP) system and display them in a data visualization application. The development of the ERP system is aligned with the Quickstart methodology applied in this research. To sum up, the sales module dashboard system is designed to assist new users of this implementation in classifying the materials, processes, or products that require consideration, and to reduce the amount of raw material or the number of shipment processes that do not apply eco-friendly practices.
Enhancing Code Similarity with Augmented Data Filtering and Ensemble Strategies Kim, Gyeongmin; Kim, Minseok; Jo, Jaechoon
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.1259

Abstract

Although COVID-19 has severely affected the global economy, information technology (IT) employees managed to perform most of their work from home. Telecommuting and remote work have promoted demand for IT services in various market sectors, including retail, entertainment, education, and healthcare. Consequently, computer and information experts are also in demand. However, producing IT experts is difficult during a pandemic owing to limitations such as the reduced enrollment of international students. Therefore, research on increasing software productivity is essential; this study proposes a code similarity determination model that utilizes augmented data filtering and ensemble strategies. It is the first automated development system for increasing software productivity that addresses the current worldwide shortage of software developers. Pre-trained language models (PLMs) dramatically improve performance in various downstream natural language processing (NLP) tasks. Unlike general-purpose PLMs, CodeBERT and GraphCodeBERT are PLMs that have learned both natural and programming languages; hence, they are suitable as code similarity determination models. The data filtering process consists of three steps: (1) deduplication of the data, (2) deletion of the intersection, and (3) an exhaustive search. The Best Matching 25 (BM25) algorithm and its length-normalized variant (BM25L) were used to construct positive and negative pairs. The performance of the model was evaluated using a 5-fold cross-validation ensemble technique. Experiments demonstrate the effectiveness of the proposed method quantitatively. Moreover, we expect this method to be optimal for increasing software productivity in various NLP tasks.
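BM25, used above to pair similar code snippets, scores a query against each document from term frequencies and inverse document frequencies. A minimal sketch of the standard formulation (the paper's exact pairing procedure and its BM25L variant are not reproduced; tokenized snippets and parameters here are illustrative):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with the classic BM25 formula."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)  # always positive
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy code snippets as token lists; the "add" snippet should rank highest.
docs = [["def", "add", "a", "b"], ["def", "mul", "a", "b"], ["print", "hello"]]
scores = bm25_scores(["add", "a"], docs)
print(scores.index(max(scores)))
```

Ranking candidate snippets this way yields lexically similar pairs as positives and dissimilar ones as negatives, which is the role BM25/BM25L plays in the paper's data construction.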

Page 37 of 118 | Total Record : 1172