Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, as well as applications, each with a systematically proposed method, a sufficient review of previous work, an expanded discussion and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Music Recommendation Based on Facial Expression Using Deep Learning Kurniawan, -; Kurniawan, Tri Basuki; Dewi, Deshinta Arrova; Zakaria, Mohd Zaki; Saringat, Zainuri; Firosha, Ardian
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3794

Abstract

Music's profound impact on human emotions makes it a powerful tool for creating personalized experiences in entertainment and therapeutic settings. This study introduces a cutting-edge music recommendation system that utilizes facial expression analysis to tailor music suggestions according to the user's emotional state. Our approach integrates a Haar cascade classifier for real-time face detection with a Convolutional Neural Network (CNN) that classifies emotions into seven distinct categories: happiness, sadness, anger, fear, disgust, surprise, and neutrality. This emotionally aware system recommends music tracks corresponding to the user's current emotional condition to enhance mood regulation and overall listener satisfaction. The effectiveness of our system was evaluated through rigorous testing, where the CNN model demonstrated a high degree of accuracy. Notably, the model achieved an overall accuracy of 84.44% in recognizing facial expressions. Precision, recall, and F1 scores consistently exceeded 84%, indicating robust performance across diverse emotional states. These results underscore the system's capability to accurately interpret and respond to complex emotional cues through tailored music suggestions. Integrating advanced deep learning techniques for face and emotion recognition enables our recommendation system to adapt dynamically to the user's emotional fluctuations. This responsiveness ensures a highly personalized music listening experience that reflects the user's feelings and potentially enhances their emotional well-being. By bridging the gap between static user profiles and the dynamic nature of human emotions, our system sets a new standard for personalized technology in music recommendation, promising significant improvements in user engagement and satisfaction.
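The final recommendation step (CNN scores → emotion label → music) can be sketched as below; the class ordering and the playlist names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Assumed ordering of the paper's seven emotion classes (not specified in the abstract).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutrality", "sadness", "surprise"]

# Hypothetical emotion-to-playlist lookup; the actual track mapping is not described.
PLAYLISTS = {
    "anger": "calming-ambient",
    "disgust": "neutral-jazz",
    "fear": "soothing-classical",
    "happiness": "upbeat-pop",
    "neutrality": "easy-listening",
    "sadness": "mellow-acoustic",
    "surprise": "energetic-dance",
}

def softmax(logits):
    """Turn raw CNN scores into a probability distribution over emotions."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def recommend(logits):
    """Pick the most probable emotion and return it with its playlist."""
    probs = softmax(np.asarray(logits, dtype=float))
    emotion = EMOTIONS[int(np.argmax(probs))]
    return emotion, PLAYLISTS[emotion]
```

For example, a score vector peaking at the fourth class would map to the "happiness" playlist.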
Overview of Software Re-Engineering Concepts, Models and Approaches Lim, Fung Ji; Sian, Tan Bee
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3034

Abstract

Legacy systems face issues such as integrating new technology, fulfilling new requirements in an ever-changing environment, and meeting new user expectations. Because of their old, complex structure and technology, modifications are hard to apply. Therefore, re-engineering is needed to change the system to meet new requirements and adapt to new technology. Software re-engineering generally refers to creating a new system from an existing one. Software re-engineering is divided into three main phases: reverse engineering, alteration, and forward engineering. Reverse engineering examines, analyzes, and understands the legacy system to derive an abstract representation of it; then, through necessary alterations such as restructuring and recoding, followed by a series of forward engineering processes, a new system is built. This paper introduces the concepts of software re-engineering, including the challenges, benefits, and motivation for re-engineering. In addition, beginning with the traditional model of software re-engineering, this paper provides an overview of other models that describe different software re-engineering processes. Each model has its unique set of processes for performing software re-engineering. Furthermore, re-engineering approaches show various ways of performing software re-engineering. Software re-engineering is a complex process that requires knowledge, tools, and techniques from different areas such as software design, programming, and testing. Therefore, monitoring the re-engineering process to ensure it meets expectations is necessary.
Content and Network Feature in Attention-based Neural Network for Stance Detection on COVID-19 Vaccination Tweets Bimantara, I Made Satria; Irdayanti, Marina; Nisa, Chilyatun; Purwitasari, Diana
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2671

Abstract

Stance detection in COVID-19 vaccination utilizing tweets is crucial for several reasons, such as public health communication, monitoring vaccine sentiment, and identifying misinformation. This research aims to explore the use of attention-based neural networks for stance detection in Indonesian COVID-19 vaccination tweets. The research focuses on enhancing accuracy by integrating content and network features. The content features represent the tweet's text, while the network features capture whether user accounts follow one another. The primary contribution of this research is the development of an Attention Long Short-Term Memory (AttLSTM) model for stance detection in Indonesian tweets related to COVID-19 vaccination. This model combines content and network features to improve accuracy in classifying user attitudes. We also highlight the performance differences between Word2Vec and FastText for numerical text representation in the AttLSTM model. The research used an Indonesian COVID-19 vaccination-related tweet dataset from prior research. The dataset is enriched with user metadata to obtain the content and network features needed to represent users' interest in tweets. Our research method involves data preparation, preprocessing, extraction of content and network features, and the development of an AttLSTM model. By integrating content and network features into the AttLSTM model with Word2Vec text representation, the study demonstrates superior performance compared to the LSTM baseline model and FastText. Adding attention mechanisms to the baseline LSTM model can capture crucial information, such as the minority class inside a tweet's text. Future research will involve exploring advanced data processing methods and ensemble learning techniques to further improve the model's performance.
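The attention pooling that an AttLSTM adds over the LSTM's per-timestep hidden states can be sketched in NumPy; the additive-attention form and the random weights below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(hidden_states, w, v):
    """Additive attention over LSTM hidden states.

    hidden_states: (T, d) matrix of per-timestep LSTM outputs.
    w: (d, d) projection and v: (d,) scoring vector, learned in practice.
    Returns the softmax attention weights and the weighted context vector.
    """
    scores = np.tanh(hidden_states @ w) @ v   # one relevance score per timestep
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over the T timesteps
    context = weights @ hidden_states         # (d,) weighted sum of states
    return weights, context

T, d = 5, 8                                   # 5 tokens, hidden size 8 (toy sizes)
h = rng.normal(size=(T, d))
w = rng.normal(size=(d, d))
v = rng.normal(size=d)
weights, context = attention_pool(h, w, v)
```

The context vector replaces the plain "last hidden state" of a vanilla LSTM, letting informative tokens (e.g. minority-class cues) dominate the pooled representation.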
A Data Pipeline Concept for Digitizing Services in Small and Medium-Sized Companies Chikhalkar, Akshay; Brünninghaus, Marc; Deppe, Sahar; Bicker, Eckard; Röcker, Carsten
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3796

Abstract

Small and medium-sized enterprises face significant challenges in their digital transformation due to their limited resources compared to larger companies. To overcome these issues, this study proposes the concept of a data pipeline that is affordable and accessible for small and medium-sized enterprises. The suggested method conceptualizes an Extract, Transform and Load (ETL) procedure, a go-to approach for data engineering, using open-source technologies. A case study of a mobile assistance system is used to illustrate this data flow and emphasize its numerous advantages and practical uses. Small and medium-sized enterprises can use this data pipeline as a jumping-off point to create a cost-effective, efficient, and scalable data infrastructure. Because the pipeline's components are modular and completely independent of one another, it is simple to expand, modify, or use them individually to meet specific business needs. A basic dashboard prototype that can be adapted for different applications is created to show the concept's viability. Although the concept provides the pipeline design, successful execution requires technical know-how. To handle resource constraints and data anomalies, this research highlights the necessity of standardized procedures and careful tool selection. The data pipeline's output may eventually be utilized for sophisticated analytical functions, giving small and medium-sized enterprises the competitive edge they need in the digital era by equipping them with data-driven solutions.
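The Extract, Transform and Load steps described above can be sketched with only open-source building blocks from the Python standard library; the sensor-reading schema and the anomaly rule (dropping empty readings) are hypothetical, since the abstract does not specify them:

```python
import csv
import io
import sqlite3

def extract(raw_csv):
    """Extract: parse raw records from a CSV source (e.g. a machine log export)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: cast types and drop incomplete records (a simple anomaly rule)."""
    clean = []
    for r in rows:
        if not r.get("value"):
            continue  # skip anomalies such as missing readings
        clean.append({"machine": r["machine"], "value": float(r["value"])})
    return clean

def load(rows, conn):
    """Load: persist the cleaned records into a local store and report the count."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings (machine TEXT, value REAL)")
    conn.executemany("INSERT INTO readings VALUES (:machine, :value)", rows)
    return conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]

raw = "machine,value\nA,1.5\nB,\nA,2.0\n"
conn = sqlite3.connect(":memory:")
count = load(transform(extract(raw)), conn)  # the empty reading from B is dropped
```

Because each stage is an independent function, any one of them can be swapped out (a different source, a stricter cleaning rule, another sink) without touching the rest, which mirrors the modularity argument above.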
Leveraging ESRGAN for High-Quality Retrieval of Low-Resolution Batik Pattern Datasets Azhar, Yufis; Marthasari, Gita Indah; Regata Akbi, Denar; Minarno, Agus Eko; Haqim, Gilang Nuril
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.3202

Abstract

As one of Indonesia's world cultural heritages, batik is an interesting research subject, including in the realm of image retrieval. One of the inhibiting factors in searching for batik images relevant to a user's query image is the low resolution of the batik images in the dataset. This can affect the dataset's quality, which in turn impacts the model's performance in recognizing batik motifs with complex details and textures. To address this problem, this study proposes using the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) method to increase the resolution of batik images. By increasing the resolution, ESRGAN is expected to clarify the details and textures of the initially low-resolution image so that these features can be extracted better. This study proves that ESRGAN can produce high-resolution batik images while maintaining the details of the batik motif itself. The resulting images' high PSNR and low MSE values confirm this. The implementation of ESRGAN has also been proven to improve the performance of the image retrieval system, with precision and average precision values increasing by 1% to 5% compared to methods that do not implement it.
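The PSNR and MSE metrics used above to judge the super-resolved images follow standard definitions, sketched here for 8-bit images (the 255 peak value is the usual assumption; the abstract does not state the bit depth):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).

    Higher PSNR (and lower MSE) means the reconstruction is closer
    to the reference image; identical images give infinite PSNR.
    """
    e = mse(a, b)
    if e == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / e)
```

A high-PSNR/low-MSE result, as reported above, indicates the upscaled batik image stays pixel-wise close to the high-resolution reference.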
Chatbot Adoption Model in Determining Student Career Path Development: Pilot Study Ahmed, Mohamed Hassan; Abdullah, Rusli; Jusoh, Yusmadi Yah; Azmi Murad, Masrah Azrifah
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3798

Abstract

A career decision is incredibly important in one's life. It shapes one's future role in society, influences professional development, and can lead to success and fulfillment. Making a sound and consistent career decision based on skills and interests is critical for personal and professional development. Since generative AI is an emerging and revolutionary technology that excels at generating content, providing consultation, and answering questions in a human-like fashion, integrating AI chatbots into the career planning process can help students get more accurate and personalized advice for their future careers. This pilot study examined students' adoption of chatbot technology for the career selection process utilizing the extended Unified Theory of Acceptance and Use of Technology (UTAUT2) model with four additional constructs which influence students' career selection, namely: Perceived Student's External Factors (PEF), Perceived Student's Interest (PSN), Perceived Career Opportunities (PCO) and Perceived Self-Efficacy (PSF). An online survey was conducted, and 37 responses were received and analyzed. The measurement model produced a promising result, and the discriminant validity, construct reliability and validity of the model were confirmed with a Cronbach's alpha (α) above the 0.70 threshold and AVE over the 0.5 cut-off for most of the constructs, including the four above-mentioned latent variables. However, the Price Value (PPV) and Facilitating Conditions (PFC) UTAUT2 constructs produced alpha (α) values of 0.680 and 0.611, respectively, which are still adequate since their AVEs are above the 0.5 threshold. Consequently, their interpretation and conclusions should be approached with caution.
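The Cronbach's alpha reliability statistic referenced above can be computed from raw item scores with the standard formula α = k/(k-1) · (1 - Σσ²ᵢ/σ²ₜ); the Likert-score matrix below is illustrative, not survey data from the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) matrix of scores.

    Uses sample variances (ddof=1). Values above ~0.70 are conventionally
    taken as acceptable internal consistency for a construct.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)     # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items yield α = 1, while uncorrelated items drive α toward 0, which is why low-alpha constructs such as PPV and PFC above warrant cautious interpretation.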
Optimising iCadet Assignment through User Profiling Fei, Yap Peak; Ting, Choo-Yee; Abdul-Rashid, Hairul A.
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3470

Abstract

The Industry Cadetship programme assigns penultimate-year students to companies matching their profiles, bridging academic learning and industry skills. Manual data analysis for assignments is time-intensive, prompting this study's objectives: (i) propose an algorithm to optimize student-company assignment using student and company profiles, (ii) propose a method for assigning lecturers to companies, and (iii) use similarity measure techniques to recommend companies with similar characteristics. Data was collected from a university's student, company, and lecturer datasets. To assign students to companies, the Haversine formula, OpenStreetMap, and NetworkX were used to calculate the shortest geographical distance between students and companies, evaluated based on mean, variance, standard deviation, and utilization rate. For the lecturer assignment, cosine similarity was applied to measure the similarity between domain descriptions and company or lecturer information after performing Voyage AI embeddings. Lecturers are assigned to companies based on the highest domain similarity scores. The performance was evaluated using accuracy, precision, recall, and F1-score. Findings showed embedding techniques significantly enhanced the matching process, with accuracy improved from 0.464 to 0.6071, precision increased from 0.417 to 0.5058, recall seeing an equal rise from 0.464 to 0.6071, and the F1-score advanced from 0.417 to 0.5264. Longer descriptive inputs further improved performance, with accuracy rising from 0.6154 to 0.7692, precision from 0.5744 to 0.7751, recall remaining steady at 0.7692, and F1-score increasing from 0.5807 to 0.7484. This work can be extended to explore job portal datasets by aligning profiles with geography and specialization.
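The Haversine great-circle distance used for the student-company assignment is a standard formula, sketched below; the 6371 km Earth radius is the usual mean value:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points.

    a = sin^2(dphi/2) + cos(phi1) * cos(phi2) * sin^2(dlambda/2)
    d = 2 * R * asin(sqrt(a))
    """
    R = 6371.0  # mean Earth radius in km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))
```

In the pipeline above, these pairwise distances would feed a NetworkX graph so the shortest student-to-company assignment can be searched over; that graph step is omitted here.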
YoloV8, EfficientNetv2, and CSP Darknet Comparison as Recognition Model’s Backbone for Drone-Captured Images Kridalukmana, Rinta; Eridani, Dania; Septiana, Risma; Windasari, Ike Pertiwi
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2880

Abstract

Artificial intelligence (AI) has recently empowered drones to support smart city applications and recognize on-the-ground objects or events. Various pre-trained backbones are available for developing object recognition models, and some of them can boost the models' accuracy. Consequently, it becomes difficult for practitioners to select a suitable backbone as a feature extractor during recognition model development. Hence, this research aims to provide a benchmark examining the performance of three popular backbones in supporting recognition models, using images captured by drones as the dataset. This research used the UAV-AUAIR dataset and compared three deep learning backbone architectures as the feature extractor, namely YoloV8_s, EfficientNetv2_s, and CSP_DarkNet_l. The head part of each selected backbone was replaced with the YoloV8Detector architecture, provided by Keras-CV, to perform the inference tasks. The models generated during training were evaluated against four measurement methods: loss function, intersection over union (IOU), across-scale mean average precision (mAP), and computational performance. The results showed that the model generated using the EfficientNetv2_s backbone outperformed the others in most criteria, except computational performance and small-object detection, which were won by YoloV8_s and CSP_DarkNet_l, respectively. Thus, EfficientNetv2_s and CSP_DarkNet_l can be considered when app development prioritizes accuracy. Meanwhile, YoloV8_s is far better when computational performance is essential, as its prediction time reached 0.8 seconds per image. This study is essential as a reference for practitioners, particularly those who want to develop an object-recognition model based on a pre-trained backbone.
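The intersection-over-union (IOU) criterion used in the evaluation above is a standard geometric measure for predicted versus ground-truth boxes; a minimal sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2).

    Returns a value in [0, 1]: 0 for disjoint boxes, 1 for identical boxes.
    """
    # Overlap rectangle (clamped to zero width/height when boxes are disjoint)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Metrics such as mAP, also used above, are built on top of this: a detection counts as correct when its IOU with a ground-truth box exceeds a chosen threshold (0.5 is a common choice).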
A Deep Learning Approach Using VGG16 to Classify Beef and Pork Images Zulfikar, Wildan Budiawan; Angelyna, Angelyna; Irfan, Mohamad; Atmadja, Aldy Rialdy; Jumadi, Jumadi
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2848

Abstract

Muslims make up 87.2% of Indonesia's population, making Indonesia one of the countries with the largest Muslim populations in the world. Muslims are expected to follow the commands of Allah SWT, one of which, in QS. Al-Maidah: 3, is to avoid consuming haram food such as pork. Even so, many traders in Indonesia still cheat to gain larger profits by adulterating beef with pork. The public's limited ability to distinguish between the two types of meat sustains this practice. Therefore, a classification process is used to distinguish the two kinds of meat using a convolutional neural network approach with VGG16 and several preprocessing stages. Two primary steps are applied during preprocessing: scaling and contrast enhancement. The VGG16 model achieves very good results, with an accuracy of 99.6% on the test set, using 4,500 images for training data and 500 images for testing data. To compare the effectiveness of these techniques, it is recommended to try alternative CNN architectures, such as MobileNet, ResNet, and GoogLeNet. Further investigation is also required to gather more varied datasets, with the ultimate goal of achieving the best possible classification, even with cell phone cameras or dim or blurry photos.
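The two preprocessing steps named above, scaling and contrast enhancement, can be sketched as follows; nearest-neighbour resizing and min-max contrast stretching are assumptions for illustration, since the abstract does not state the exact methods used:

```python
import numpy as np

def rescale(img, size):
    """Nearest-neighbour resize of a 2-D (or HxWxC) image to (size, size).

    A stand-in for the paper's scaling step; production code would
    typically use bilinear interpolation from an image library.
    """
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return img[rows][:, cols]

def stretch_contrast(img):
    """Min-max contrast stretching to the full 0-255 range."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)  # flat image: nothing to stretch
    return (img - lo) / (hi - lo) * 255.0
```

Applied before VGG16, scaling gives every sample the fixed input size the network expects, while contrast stretching makes texture differences between the two meat types more pronounced.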
Attendance System Leveraging Haar Cascade Detection And CNN-Based Facenet Recognition Technology Syarif, Muhammad Adib; Gunawan, Wawan
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2464

Abstract

The objective of this research is to investigate face identification methods in the context of employee recognition, as a solution to the problem of attendance systems that still use manual methods or applications without identity validation. The main goal is to achieve optimal accuracy and consistency in the identification process using Convolutional Neural Networks (CNN) with FaceNet and Haar Cascade. This research focuses on the challenge of managing employee attendance, particularly for those working remotely, which can be vulnerable to fraudulent activity. The proposed solution combines facial recognition to enhance identity verification and attendance tracking and to assist companies in achieving their goals. The study employed a dataset of 1,050 employee face images and divided it into three scenarios of training-to-testing ratios: the first scenario (80:20), the second scenario (70:30), and the third scenario (60:40). The results indicate that the model in the first scenario had the highest accuracy of 98% and outperformed the models in the second and third scenarios in terms of precision, recall, and F1-score, with values of 98.60%, 98.70%, and 98.60%, respectively. The results indicate that the model used in the first scenario is the most effective at classifying predicted cases and consistently predicting employee identities. Based on these findings, we recommend enlarging the dataset and analyzing important classes to improve the accuracy and generalization of face identification models in the context of employee recognition. Combining facial recognition improves identity verification and attendance tracking, making it easier for companies to manage employee attendance with greater effectiveness.
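The split scenarios and the precision, recall, and F1-score metrics above follow standard definitions, sketched here; the confusion counts in the usage example are illustrative, not the study's results:

```python
def split_counts(n, train_ratio):
    """Split a dataset of n samples by a train:test ratio, e.g. 0.8 for 80:20."""
    train = int(n * train_ratio)
    return train, n - train

def classification_scores(tp, fp, fn):
    """Precision, recall, and F1 from true-positive/false-positive/false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# The paper's first scenario: 1,050 face images split 80:20.
train_n, test_n = split_counts(1050, 0.8)   # 840 training, 210 testing images
```

F1 is the harmonic mean of precision and recall, which is why the three reported values above track each other so closely for a well-balanced model.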
