Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of the results of high-quality research in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers in fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Pre-Trained CNN Architecture Analysis for Transformer-Based Indonesian Image Caption Generation Model Rifqi Mulyawan; Andi Sunyoto; Alva Hendi Muhammad
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1387

Abstract

Classification and object recognition in image processing have significantly improved computer vision tasks. These methods are often applied to visual problems, especially image classification using Convolutional Neural Networks (CNN). In the popular state-of-the-art (SOTA) task of generating a caption for an image, a CNN is commonly used as the encoder for image feature extraction. Instead of performing direct classification, the extracted features are passed from the encoder to the decoder to generate the output sequence, so the CNN layers related to the classification task are not required. This study aims to determine which pre-trained CNN architecture performs best at extracting image features when a state-of-the-art Transformer model is used as the decoder. Unlike the original Transformer architecture, we implemented a vector-to-sequence approach instead of sequence-to-sequence. The Indonesian Flickr8k and Flickr30k datasets were used in this research. Evaluations were carried out on several pre-trained architectures, including ResNet18, ResNet34, ResNet50, ResNet101, VGG16, Efficientnet_b0, Efficientnet_b1, and Googlenet. Both qualitative model inference results and quantitative evaluation scores were analyzed. The test results show that the ResNet50 architecture produces stable sequence generation with the highest accuracy. Our experiments also show that fine-tuning the encoder can significantly increase the model's evaluation score. As future work, further exploration with larger datasets such as Flickr30k, MS COCO 14, MS COCO 17, and other Indonesian image captioning datasets, as well as implementation of newer Transformer-based methods, could yield a better Indonesian automatic image captioning model.
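The vector-to-sequence generation described in this abstract can be sketched as a simple autoregressive loop: a fixed image feature vector is fed to a decoder that emits tokens one at a time until an end token. This is a minimal illustrative stand-in, not the paper's actual model; `stub_decoder` and its canned output are hypothetical placeholders for a real Transformer decoder.

```python
# Toy sketch of vector-to-sequence caption generation. The decoder here is a
# hypothetical stub standing in for a trained Transformer decoder.

def stub_decoder(image_vec, tokens):
    """Hypothetical decoder: returns the next token given the image feature
    vector and the tokens generated so far."""
    canned = ["a", "dog", "runs", "on", "grass", "<end>"]
    return canned[len(tokens)]

def generate_caption(image_vec, decoder, max_len=20):
    """Greedy autoregressive decoding from a single feature vector."""
    tokens = []
    while len(tokens) < max_len:
        nxt = decoder(image_vec, tokens)  # one vector-to-sequence step
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate_caption([0.1, 0.7, 0.3], stub_decoder))  # -> a dog runs on grass
```

In the paper's setting, the feature vector would come from one of the evaluated pre-trained CNN encoders (e.g., ResNet50) rather than a literal list of floats.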
A Survey on Forms of Visualization and Tools Used in Topic Modelling Ruhaila Maskat; Shazlyn Milleana Shaharudin; Deden Witarsyah; Hairulnizam Mahdin
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1313

Abstract

In this paper, we survey recent publications on topic modeling and analyze the forms of visualization and the tools they use. This information should help Natural Language Processing (NLP) researchers make better decisions about which types of visualization are appropriate for them and which tools can help. It could also spark further development of existing visualizations, or the emergence of new ones where a gap exists. Topic modeling is an NLP technique used to identify topics hidden in a collection of documents; visualizing these topics permits a faster understanding of the underlying subject matter and its domain. This survey covers publications from 2017 to early 2022, reviewed using the PRISMA methodology. One hundred articles were collected, and 42 were found eligible for this study after filtration. Two research questions were formulated: "What are the different forms of visualizations used to display the result of topic modeling?" and "What visualization software or API is used?" From our results, we discovered that different forms of visualization serve different display purposes; we categorized them as maps, networks, evolution-based charts, and others. We also found that LDAvis is the most frequently used software/API, followed by R language packages and D3.js. The primary limitation of this survey is that it is not exhaustive; hence, some eligible publications may not be included.
Utilization of Business Analytics by SMEs In Halal Supply Chain Management Transactions Suziyanti Marjudi; Roziyani Setik; Raja Mohd Tariqi Raja Lope Ahmad; Wan Azlan Wan Hassan; Aza Azlina Md Kassim
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1308

Abstract

Halal supply chain management has transformed beyond food and beverage certification. However, the extant literature shows that Halal transaction management still has much room to improve in terms of transaction permissibility; the main gap is that understanding of Halal businesses and their transactions is limited to systems that define e-commerce and financial technology data separately within the IT business environment. This study aims to demonstrate the usefulness of managing Halal transactions and analyzing their permissibility through a proposed Halal Supply Chain Management Transactions (HSCMT) model and prototype, applying a business analytics approach to integrate both e-commerce and financial technology data. The study uses literature analysis to ensure the correct structure of the integrated datasets before modeling transaction permissibility and prototyping its analytics into decision-making analytics. The developed HSCMT prototype uses a payment gateway that can be embedded into a Halal SME owner's e-commerce site. This creates a holistic Halal financial technology (FinTech) transaction permissibility dashboard, increasing the effectiveness of HSCMT for Malaysia Halal SME Owners (MHSO) with an average usability score of 83.67%. Results also indicate that the key mechanisms for verifying transactional permissibility are the source of the transaction, the use of the transaction, the transaction flow, and the transaction agreement. Furthermore, these mechanisms must be mapped onto a submodule after transformation and modeling of the transaction dataset. Improvements using multisource data points can be considered in future work, as this research focuses only on local data points from one payment gateway service, owing to data policy restrictions involving overseas supply chain and transaction documentation.
This research utilizes data available in the business through data management, optimization, mining, and visualization to measure performance and drive a company's growth. The competency of business analytics can benefit Halal SME players by providing insights into the permissibility decision-making process.
Value-based modeling and simulation for sustainable ICT4D Omotola, Akindoyo Oluwatosinloba; Waishiang, Cheah; Khairuddin, Muhammad Asyraf bin; Jalil, Nurfauza binti; Phang, Eaqerzilla
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1314

Abstract

ICT4D is an acronym for information and communication technologies for development. It covers the many initiatives now being carried out in underdeveloped countries around the globe; these technology-oriented initiatives aim to promote the growth and prosperity of the regions they serve. People from the local community often volunteer their time to serve as program managers when NGOs or governments provide funding. The term "sustainability" describes an ICT project's capacity to carry on development even after the initial funding has run out. This article breaks down the e3value technique so that we may better understand how it works; the method uses various value-based modeling tools to create an e-Business model. However, one of the main problems with e-Business technologies is that they often receive insufficient data to be effective. This study examines the procedures that must be followed to learn more about an e-Business concept and mold it into a form that allows implementation in a technologically and commercially practical manner. To gather information on the Long Lamai community's economic activities, a survey was used in this research. The net value sheet for each of their businesses was then simulated from the data using e3value. Initial findings indicate that 5 of 13 enterprises have negative net incomes, and each of those five businesses has its roots in a local enterprise. However, this figure does not consider labor costs, taxes, or long-term investments made with government money. We continued the simulation with sensitivity analysis and scenario-based analysis, which yields total revenue of 1,410,000 for the five enterprises in the fourth year after accounting for taxes and fees in the sensitivity analysis.
This opens the door for further research into how discrepancies in wealth and income may arise in the simulation of an economy due to the right use of ICT for long-term development and prosperity. Profitability sheets may be created once certain business model assumptions are specified, such as the monetary value of the commodities produced, distributed, and consumed. These may be used to decide whether the project can become profitable for all parties concerned.
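The net-value-sheet idea at the core of the e3value analysis can be sketched as a simple calculation: each enterprise has value inflows and outflows, and net value is their difference. This is an illustrative sketch only; the enterprise names and figures below are hypothetical, not the Long Lamai survey data, and the real e3value tool models value exchanges far more richly.

```python
# Minimal net-value-sheet sketch in the spirit of e3value (illustrative;
# enterprise names and monetary figures are invented for the example).

def net_value(inflows, outflows):
    """Net value = total value received minus total value given up."""
    return sum(inflows) - sum(outflows)

enterprises = {
    "handicraft": ([1200, 800], [1500]),    # hypothetical figures
    "homestay":   ([3000], [1000, 500]),
}

for name, (inflows, outflows) in enterprises.items():
    nv = net_value(inflows, outflows)
    print(name, nv, "negative" if nv < 0 else "positive")
```

A sensitivity analysis, as in the study, would rerun this calculation while varying assumptions such as prices or transaction volumes.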
Multipath Routing Implementation in SD-IoT Network Using OpenFlow-based Routing Metrics Atthariq, Muhammad Daffa; Hidayat, Rizky Fauzi Ari; Sadida, Medina Kaulan; Syafa'ah, Lailis; Sumadi, Fauzi Dwi Setiawan
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1691

Abstract

The growing implementation of the Internet of Things (IoT) may increase the complexity of the data transmission process between smart devices, and the route generation process between available nodes will burden the intermediary nodes. One possible solution is the integration of Software Defined Networking and IoT (SD-IoT) to provide network automation and management. The separation of the network control and data forwarding functions can provide multipath delivery between each node in the IoT environment. In addition, the controller can directly extract the resource usage of the intermediary devices, which can be used as routing metric variables to manage resource utilization on those devices. Instead of using traditional routing, this paper develops multipath routing based on the Depth First Search (DFS) and Dijkstra algorithms for acquiring an efficient path using OpenFlow-based routing metrics. The metric extraction process was handled by the traffic monitoring module, which obtained the variables using the Port and Aggregate Flow Statistics features. The metric calculation provided the multipath, constructed from the switches' resource usage. Each path was selected based on the smallest cost and the probability provided by the group table feature in OpenFlow. The results showed that the Dijkstra algorithm could create the multipath more swiftly than DFS, with a time difference of 0.6 s. The Quality of Service (QoS) results also indicated that the proposed routing metric variables could maintain the transmission process efficiently.
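The Dijkstra path selection described in the abstract can be sketched as a priority-queue search over link costs. This is a generic minimal implementation, not the paper's code; the topology and costs below are invented, whereas in the paper the costs would come from OpenFlow Port and Aggregate Flow statistics.

```python
import heapq

# Minimal Dijkstra sketch for lowest-cost path selection over link costs.
# Switch names s1..s3 and the costs are hypothetical example data.

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]}. Returns (total_cost, path)."""
    pq = [(0, src, [src])]          # priority queue ordered by accumulated cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []          # destination unreachable

topo = {"s1": [("s2", 1), ("s3", 4)], "s2": [("s3", 1)], "s3": []}
print(dijkstra(topo, "s1", "s3"))   # (2, ['s1', 's2', 's3'])
```

A multipath variant, as in the paper, would keep the k cheapest distinct paths and install them as OpenFlow group-table buckets rather than returning only the single best path.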
(PANDEMIC Covid-19): A Shooter Game for Education - the Impact Measurement of War Games on Virus Eradication Lessons for Students Wibowo, Angga Wahyu; Karima, Aisyatul; Thohari, Afandi Nur Aziz; Santoso, Kuwat; Sato-Shimokawara, Eri
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1167

Abstract

(PANDEMIC Covid-19) is an educational shooter game inspired by the Covid-19 pandemic, which occurred from the end of 2019 until early 2022. There are two game modes: Third-Person Shooter (TPS) and First-Person Shooter (FPS). This study was carried out to address the absence of shooter-genre games in the student learning process. Development followed the Pressman method, whose stages include planning, analysis, game development and artificial intelligence, implementation, and evaluation. The testing phase used software testing techniques based on the ISO 9126 standard and involved 100 participants aged between 17 and 20 years, 55% male and 45% female. The factors tested included functionality, reliability, portability, usability, efficiency, and maintainability, with only two answer choices: agree and disagree. The functionality factor had an agreement rate of 85%; reliability, 79%; portability, 86%; usability, 83%; efficiency, 79%; and maintainability, 87%. It was therefore concluded that this game is suitable for use in student learning in the shooter genre. This research was also motivated by the fact that this game genre is currently used for hobbies and for profit by developers and professional players rather than for learning. Further research should develop game levels, enable features for playing online together with other users, and extend the game to Android and iOS.
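The per-factor percentages reported above are simple agreement rates over binary responses. As a small illustration (the response list below is hypothetical, constructed only to reproduce one reported figure, not the study's raw data):

```python
# Sketch of tallying agree/disagree answers per ISO 9126 factor.

def agree_rate(responses):
    """Percentage of 'agree' answers in a list of 'agree'/'disagree' strings."""
    return 100 * responses.count("agree") / len(responses)

# Hypothetical responses for the functionality factor: 100 participants,
# 85 of whom agreed, matching the 85% rate reported in the abstract.
functionality = ["agree"] * 85 + ["disagree"] * 15
print(agree_rate(functionality))  # 85.0
```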
A Rule-based Mobile Application for Diagnosing Pet Disease: Design and Implementation Ling Li Ng; Hanayanti Hafit; Ruhaya Ab. Aziz; Nur Liesa Mohammad Azemi; Siti Hawa Anurddin
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.7.2.1325

Abstract

Animals kept in homes for personal enjoyment rather than for work or sustenance are typically referred to as "pets." A pet's daily routine can include exercising its muscles and going outside to relieve stress. Pets may occasionally drink from community water dishes contaminated with other animals' bacteria, viruses, or parasites, and may unknowingly contract infections as a result. A pet's behavior and condition therefore need to be checked periodically: an animal's behavior is directly impacted by its health, and vice versa. A pet disease diagnosis application is crucial for pet owners to provide consistent and suitable pet health care; it helps owners identify potential illnesses before their animals develop chronic ones. Thus, this paper presents the construction of a mobile application for diagnosing pet diseases. The application offers pet owners information on their animals' health and safety, and owners can contact veterinarians for rare cases or crises in the application's chat room. Rule-based inference is used to determine the possible diseases based on the pet's symptoms. A system prototyping methodology was applied to develop this Android mobile application using Visual Studio Code and a Firebase database. User acceptance testing was performed to gauge user satisfaction with the proposed pet disease diagnosis application before it moves to production.
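The rule-based inference mentioned in the abstract can be sketched as forward chaining over symptom rules: each rule maps a set of required symptoms to a candidate disease, and a rule fires when all of its symptoms are observed. The rules and disease names below are invented for illustration; they are not the application's actual knowledge base, nor veterinary advice.

```python
# Minimal rule-based diagnosis sketch. All rules and diseases here are
# hypothetical examples, not the application's real knowledge base.

RULES = [
    ({"vomiting", "lethargy", "diarrhea"}, "parvovirus (hypothetical)"),
    ({"coughing", "nasal discharge"}, "kennel cough (hypothetical)"),
    ({"scratching", "hair loss"}, "mange (hypothetical)"),
]

def diagnose(symptoms):
    """Return every disease whose rule's symptoms are all observed."""
    observed = set(symptoms)
    return [disease for required, disease in RULES if required <= observed]

print(diagnose(["coughing", "nasal discharge", "lethargy"]))
```

Because multiple rules can fire, the function returns a list of candidates; a real system would rank them or escalate ambiguous cases to the in-app veterinarian chat.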
Implementation of Convolutional Neural Network and Long Short-Term Memory Algorithms in Human Activity Recognition Based on Visual Processing Video Rachman, Andi Nur; Mubarok, Husni; Fitriani Dewi, Euis Nur; Edwinda Putra, Rama
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1504

Abstract

Human Activity Recognition (HAR) is an interesting research topic, especially for identifying human movement in video-based security surveillance, for example recognizing the symptoms of an illness from a movement. HAR is used in this research as the key to better understanding the various semantics contained in video, in order to discover the patterns of human movement, particularly in sports. In this study, a combination of the CNN and LSTM algorithms was applied, using several variations of the model parameter values for the dropout layer and batch size, to convert the patterns in the video frames into a HAR model. The convolution layers extract spatial features from each frame; the extraction results are fed to the LSTM layers to model the temporal sequence of human movement. In this way, the network learns spatiotemporal features directly through end-to-end training, producing a robust model. The test data comprised 10 sports activities obtained from the University of Central Florida (UCF) dataset used in related research. The results showed quite good performance, although there were still classification errors for sports activities with similar movements. The classification results show a loss value of 0.4 and an accuracy of 0.94. Future research should address the loss value, which is still high enough that the model occasionally misclassifies sports activities with similar movements.
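A common data-preparation step for a CNN-LSTM pipeline like the one described is to cut the video into fixed-length frame windows: the CNN extracts spatial features per frame, and the LSTM consumes each window as a temporal sequence. The sketch below shows only that windowing step; the window length and stride are assumptions for illustration, not the paper's values.

```python
# Sketch of slicing a video into fixed-length frame sequences for a
# CNN-LSTM pipeline. length=16 and stride=8 are illustrative assumptions.

def sliding_windows(frames, length=16, stride=8):
    """Return overlapping windows of `length` frames, shifted by `stride`."""
    return [frames[i:i + length]
            for i in range(0, len(frames) - length + 1, stride)]

frames = list(range(40))          # integers standing in for video frames
wins = sliding_windows(frames)
print(len(wins), wins[0][:4])     # 4 [0, 1, 2, 3]
```

Overlapping windows (stride smaller than length) give the temporal model more training sequences per video at the cost of some redundancy.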
Text Classification Using Genetic Programming with Implementation of Map Reduce and Scraping Wedashwara, Wirarama; Irmawati, Budi; Wijayanto, Heri; Arimbawa, I Wayan Agus; Widartha, Vandha Pradwiyasma
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1813

Abstract

Classifying text documents from online media is a big data problem that requires automation, and text classification accuracy can decrease when there are many ambiguous terms between classes. Hadoop MapReduce is a parallel processing framework for big data that has been widely used for text processing. This study presents text classification using genetic programming, with text pre-processing via Hadoop MapReduce and data collection via web scraping. Genetic programming is used to perform association rule mining (ARM) before text classification in order to analyze big data patterns. The data are articles from ScienceDirect retrieved with three keywords. This study aims to perform text classification with ARM-based data pattern analysis, combining data collection through web scraping, pre-processing using MapReduce, and classification using genetic programming. Through web scraping, data were collected with duplicates reduced, totaling 17,718 records. MapReduce performed tokenization and stop-word removal, producing 36,639 terms, of which 5,189 were unique terms and 31,450 common terms. Evaluation of ARM with different amounts of multi-tree data can produce more and longer rules with better support; the multi-tree also produces more specific rules and better ARM performance than a single tree. The text classification evaluation shows that a single tree produces better accuracy (0.7042) than a decision tree (0.6892), with the multi-tree lowest (0.6754). The evaluation also shows that the ARM results are not in line with the classification results: the multi-tree shows the best result (0.3904), ahead of the decision tree (0.3588), with the single tree lowest (0.356).
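The tokenization and stop-word-removal stage described here follows the classic MapReduce word-count pattern: a map step emits per-document term counts, and a reduce step merges them. The sketch below is a toy in-process stand-in for Hadoop MapReduce, not the study's pipeline; the stop-word list and documents are invented.

```python
from collections import Counter
from functools import reduce

# Toy map-reduce sketch of tokenization + stop-word removal. The stop-word
# set and the documents are hypothetical example data.

STOPWORDS = {"the", "of", "a", "and", "is"}

def map_phase(doc):
    """Map: tokenize, drop stop words, emit partial term counts."""
    return Counter(t for t in doc.lower().split() if t not in STOPWORDS)

def reduce_phase(a, b):
    """Reduce: merge two partial term-count tables."""
    return a + b

docs = ["the genetic programming of text",
        "text classification and programming"]
counts = reduce(reduce_phase, map(map_phase, docs))
print(counts["text"], counts["programming"])  # 2 2
```

In Hadoop, the map and reduce functions run in parallel across many workers over HDFS blocks; the dataflow is the same as this single-process version.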
Inversed Control Parameter in Whale Optimization Algorithm and Grey Wolf Optimizer for Wrapper-based Feature Selection: A comparative study Yab, Li Yu; Wahid, Noorhaniza; A Hamid, Rahayu
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1509

Abstract

The Whale Optimization Algorithm (WOA) and Grey Wolf Optimizer (GWO) are well-performing metaheuristic algorithms used by various researchers to solve feature selection problems. Yet the slow convergence of both algorithms can degrade feature selection performance and classification accuracy. To overcome this issue, a modified WOA (mWOA) and a modified GWO (mGWO) for wrapper-based feature selection were proposed in this study. The proposed mWOA and mGWO were given a new inversed control parameter, expected to give the search agents a larger search area in the early phase of the algorithms and thus faster convergence. The objective of this comparative study is to investigate and compare the effectiveness of the inversed control parameter in the proposed methods against the original algorithms in terms of the number of selected features and classification accuracy. The proposed methods were implemented in MATLAB using 12 datasets of differing dimensionality from the UCI repository, with kNN as the classifier to evaluate the classification accuracy of the selected features. Based on the experimental results, mGWO did not show significant improvement in feature reduction and maintained accuracy similar to the original GWO. On the contrary, mWOA outperformed the original WOA on both criteria, even on high-dimensional datasets. Evaluating the execution time of the proposed methods, utilizing different classifiers, and hybridizing the proposed methods with other metaheuristic algorithms are future works worth exploring.
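In standard WOA and GWO, the control parameter a decreases linearly from 2 to 0 over the iterations, shifting the swarm from exploration to exploitation. The sketch below contrasts that schedule with a simple linear inversion; this inversion is an assumption made for illustration, since the abstract does not give the paper's exact formula, which may differ.

```python
# Control-parameter schedules over T iterations. a_original is the standard
# WOA/GWO schedule (2 -> 0); a_inversed is an assumed linear inversion
# (0 -> 2) for illustration only, and may differ from the paper's formula.

def a_original(t, T):
    """Standard schedule: a = 2 - 2t/T, decreasing from 2 to 0."""
    return 2 - 2 * t / T

def a_inversed(t, T):
    """Assumed inversed schedule: a = 2t/T, increasing from 0 to 2."""
    return 2 * t / T

T = 100
print(a_original(0, T), a_original(T, T))   # 2.0 0.0
print(a_inversed(0, T), a_inversed(T, T))   # 0.0 2.0
```

In both algorithms a scales the coefficient A = 2a·r - a (r random in [0, 1]), and |A| > 1 favors exploration, so changing a's schedule directly changes when the swarm explores versus exploits.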
