Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat,
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 62 Documents
Search results for issue "Vol 8, No 1 (2024)" : 62 Documents
NasiQu: Designing Mobile Applications with the Concept of Social Entrepreneurship for Hunger People Using Agile Methods Hidayat, Hendra; Yulastri, Asmar; Susanto, Perengki; Ardi, Zadrian; Yustisia, Henny
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.1896

Abstract

Entrepreneurship has become an essential part of everyday life, yet social problems, especially hunger, remain a pressing issue. Digital entrepreneurship has become a recent trend, with many digital business platforms and applications emerging. Combining digital entrepreneurship with social activities, however, is even more compelling; this combination is known as Digital Social Entrepreneurship. This study describes and explains the stages in designing NasiQu, a mobile social entrepreneurship application, and shows how the Agile method can make digital social entrepreneurship engaging by involving people's sense of concern for those in need, in this case, hungry people. The Agile method is a short-term system development approach that calls for quick adaptation and developers who can accommodate any change. The resulting NasiQu product facilitates donations of packaged rice to those who need food; in this implementation, it is still specific to orphans. The application has three user roles, namely donors, admins, and partners, each with a different role and usage flow. In addition, the application makes food donation activities more effective and allows them to be carried out anywhere and anytime. It is hoped that the ongoing implementation of this activity will help many people who need food and open new job opportunities.
Cluster Analysis of Japanese Whiskey Product Review Using K-Means Clustering Witarsyah, Deden; Akbar, Moh Adli; Praditha, Villy Satria; Sugiat, Maria
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2601

Abstract

Since 2008, the Japanese whiskey business has grown steadily. The whiskey market (at factory price) was expected to reach $2.95 billion in 2019, accounting for 8.6 percent of the entire alcoholic beverage industry. The rise in popularity of Japanese whiskey is associated with the country's growing international reputation. Founded in 1985 as an independent bottler, Master of Malt was the first company to serve clients who ordered single malt whiskey through a mail-order system. Master of Malt's omnichannel approach encompasses all channels available to the company, referring to the organization's capability to provide speed and precision from any place at any time. As the brand has grown over the years, it has used various marketing strategies, including a website redesign and rebuild that involved creating all relevant content and designing and constructing landing pages. Applying a clustering technique, we found that the data divides into four distinct groups and that these clusters may serve as a recommender system based on the occurrence of terms in each category. Our summarizing component combined phrases related to the same subtopics and provided users with a concise summary and sentiment information about each group of phrases.
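A review-clustering pipeline of the kind described above can be sketched as follows; this is a minimal illustration using scikit-learn with TF-IDF features over made-up review snippets, not the paper's actual data or pipeline. The choice of four clusters mirrors the four-group result reported in the abstract.

```python
# Minimal sketch: cluster short product-review snippets into four groups.
# The sample reviews and k=4 are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "smooth and sweet with honey notes",
    "sweet honey finish, very smooth",
    "smoky peat flavour, heavy smoke",
    "strong peat smoke on the nose",
    "fast shipping and great packaging",
    "arrived quickly, well packaged",
    "overpriced for the bottle size",
    "too expensive, poor value",
]

# Term-weight matrix; each row is one review.
vectors = TfidfVectorizer().fit_transform(reviews)
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(vectors)

# Reviews about the same subtopic should tend to land in the same cluster.
for text, label in zip(reviews, model.labels_):
    print(label, text)
```

Each cluster's most frequent terms could then feed a recommender or a per-cluster summary, as the abstract suggests.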
Text-Based Content Analysis on Social Media Using Topic Modelling to Support Digital Marketing Buana, Gandhi Surya; Tyasnurita, Raras; Puspita, Nindita Cahya; Vinarti, Retno Aulia; Mahananto, Faizal
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.1636

Abstract

This study aims to create Social Media Analytics (SMA) tools that help digital marketers and content creators generate content topics for text-based Instagram content and support digital marketing strategy. Since no existing SMA tools provide topic discovery for text-based Instagram content, this research sets out to build one. The data requirements include content text, caption text, likes, comments, upload time, and content category, obtained through Instascrapper. The method used is topic modelling with the Latent Dirichlet Allocation (LDA) approach to find the most dominant topic in the content. Optical Character Recognition (OCR) performs an image transformation process to extract text from text-based Instagram content images. The resulting SMA tool was tested on three expert users: 93% of test participants could use the SMA to find topic references, and 85% found the tools usable despite some difficulty. Since the test results show that the SMA tools still need development, further research can focus on improving the user experience to increase user acceptance by attending to the tools' ease of use. SMA tools can also target users such as data analysts, business intelligence analysts, or others within a company to support decision-making for the marketing department.
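The LDA topic-discovery step described above can be sketched as follows; the captions, the three-topic setting, and the use of scikit-learn are illustrative assumptions, not the tool's actual implementation.

```python
# Minimal sketch of LDA topic discovery over Instagram-style captions.
# The captions and n_components=3 are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

captions = [
    "new recipe today healthy breakfast ideas",
    "breakfast recipe with eggs and avocado",
    "gym workout routine for beginners",
    "morning workout and gym motivation",
    "travel tips for cheap flights",
    "flight deals and travel hacks",
]

# LDA works on raw term counts, not TF-IDF weights.
counts = CountVectorizer(stop_words="english").fit_transform(captions)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Each row of doc_topics is a probability distribution over the 3 topics;
# the dominant topic per caption is the argmax of that distribution.
doc_topics = lda.transform(counts)
dominant = doc_topics.argmax(axis=1)
```

In the tool described above, the caption text would first be augmented with text extracted from the content images via OCR before being fed to this step.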
Fuzzy Soft Set Clustering for Categorical Data Yanto, Iwan Tri Riyadi; Apriani, Ani; Wahyudi, Rofiul; WaiShiang, Cheah; Suprihatin, -; Hidayat, Rahmat
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2364

Abstract

Categorical data clustering is difficult because categorical data lacks a natural order and can comprise groups related only along specific dimensions. Conventional clustering algorithms such as k-means cannot be applied directly to categorical data. Numerous clustering algorithms for categorical data, such as fuzzy k-modes and its enhancements, have been developed to overcome this issue, but these approaches still produce clusters with low purity and weak intra-similarity; furthermore, transforming categorical attributes to binary values can be computationally costly. This research provides a fuzzy clustering technique for categorical data based on soft set theory and the multinomial distribution: a multi-soft set is produced using a rotation-based soft set, and the data are then clustered using a multivariate multinomial distribution. Comparison with established baseline algorithms demonstrates that the proposed approach excels in purity, rank index, and response times, achieving improvements of up to 97.53% over existing methods.
Comparison Analysis of CXR Images in Detecting Pneumonia Using VGG16 and ResNet50 Convolution Neural Network Model Izdihar, Nur; Rahayu, Syarifah Bahiyah; Venkatesan, K
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2258

Abstract

Pneumonia is a lung disease that causes serious fatalities worldwide. It can be complicated for medical professionals to identify since it shares similarities with other lung diseases such as lung cancer and cardiomegaly. Hospitals face difficulty finding professional radiologists to detect pneumonia through radiographic processes. This research proposes a VGG16- and ResNet50-based system architecture using the Convolutional Neural Network (CNN) module to detect pneumonia from chest X-ray (CXR) images. The performance of the proposed models is compared on parameters such as processing time, accuracy, and loss. The pneumonia dataset was obtained from Kaggle and divided into 70% for training, 15% for validation, and 15% for testing. The results show that the proposed ResNet50 architecture outperforms the VGG16 architecture, as can be clearly observed from both models' loss and accuracy results. Moreover, ResNet50's processing time for training and predicting on the CXR images is much faster than VGG16's. Hence, ResNet50 performs better than VGG16 in loss, accuracy, and the time needed to train and predict. In conclusion, the findings show the capability of CNN models for detecting pneumonia in CXR images, thus reducing the burden on professional radiologists.
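The 70/15/15 split described above can be sketched with two chained `train_test_split` calls; the placeholder arrays below stand in for the Kaggle CXR images and labels, and stratification is an assumption added to keep class balance across the splits.

```python
# Minimal sketch of a stratified 70/15/15 train/validation/test split.
# X and y are stand-ins for CXR image data and pneumonia labels.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)                 # placeholder image data
y = np.random.RandomState(0).randint(0, 2, 1000)   # 0 = normal, 1 = pneumonia

# First carve off 70% for training, then split the remainder evenly (15%/15%).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))       # 700 150 150
```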
Offline Handwriting Writer Identification using Depth-wise Separable Convolution with Siamese Network Suteddy, Wirmanto; Agustini, Devi Aprianti Rimadhani; Atmanto, Dastin Aryo
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2148

Abstract

Offline handwriting writer identification has significant implications for forensic investigations and biometric authentication. Handwriting, as a distinctive biometric trait, provides insights into individual identity. Despite advancements in handcrafted algorithms and deep learning techniques, the persistent challenges of intra-writer variability and inter-writer similarity continue to drive research efforts. In this study, we build on depth-wise separable convolution architectures such as Xception, which proved robust in our previous research comparing deep learning architectures including MobileNet, EfficientNet, ResNet50, and VGG16, where Xception demonstrated minimal training-validation disparities for writer identification. Expanding on this, we use a similarity/dissimilarity-based model for offline handwriting writer identification, known as the Siamese Network, incorporating the Xception architecture. Similarity or dissimilarity is measured by the Manhattan (L1) distance between the representation vectors of each input pair. We train on the publicly available IAM and CVL datasets; our approach achieves accuracy rates of 99.81% for IAM and 99.88% for CVL. Evaluation revealed only two prediction errors on the IAM dataset (99.75% accuracy) and five on CVL (99.57% accuracy). These findings modestly surpass existing results, highlighting the potential of our methodology to enhance writer identification accuracy. This study underscores the effectiveness of integrating the Siamese Network with depth-wise separable convolution and its practical implications for supporting writer identification in real-world applications.
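The Manhattan (L1) distance used to compare the two embedding vectors a Siamese network produces for a handwriting pair can be sketched as follows; the embedding values and the 0.5 decision threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the L1 (Manhattan) comparison step of a Siamese network.
# The 3-dimensional embeddings and the threshold are illustrative only;
# real embeddings would come from the twin Xception branches.
import numpy as np

def l1_distance(u, v):
    """Sum of absolute component-wise differences between two embeddings."""
    return float(np.abs(np.asarray(u) - np.asarray(v)).sum())

same_writer = l1_distance([0.2, 0.9, 0.4], [0.25, 0.85, 0.42])  # small gap
diff_writer = l1_distance([0.2, 0.9, 0.4], [0.9, 0.1, 0.8])     # large gap

# A pair is judged "same writer" when the distance falls below a threshold.
THRESHOLD = 0.5
print(same_writer < THRESHOLD, diff_writer < THRESHOLD)  # True False
```

In a full Siamese model, this distance (or a small dense layer on top of the element-wise |u - v| vector) feeds the similarity decision during training and inference.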
Artificial Intelligence and Machine Learning for Green Shipping: Navigating towards Sustainable Maritime Practices Nguyen, Hoang Phuong; Nguyen, Cao Thao Uyen; Tran, Thi Men; Dang, Quoc Hai; Pham, Nguyen Dang Khoa
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2581

Abstract

This paper investigates the role that artificial intelligence (AI) plays in promoting sustainability in the marine industry. Through a detailed examination of existing trends, problems, and opportunities, it demonstrates the potential of AI-driven technology to improve vessel operations, decrease emissions, and promote environmental stewardship. Several key studies highlight the significance of policy interventions that encourage the adoption of AI, including financial incentives, legal frameworks, and capacity-building programs. Throughout this work, the importance of AI in driving efficiency, safety, and sustainability is emphasized, as is the urgent need for action on climate change and environmental degradation in the marine sector. The marine industry can lessen its carbon footprint, decrease pollution, and improve ecosystem health by shifting to alternative fuels, renewable energy sources, and AI-powered technologies. The work closes with an appeal to policymakers, industry stakeholders, and technology providers to prioritize investment in AI research and development and to foster collaboration that accelerates the transition to a more sustainable and resilient marine sector.
Batik Classification using Microstructure Co-occurrence Histogram Minarno, Agus Eko; Soesanti, Indah; Nugroho, Hanung Adi
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2152

Abstract

Batik Nitik is a distinctive form of batik originating from the culturally rich region of Yogyakarta, Indonesia. What sets it apart from other batik styles is its remarkable motif similarity, a characteristic that often poses a considerable challenge when attempting to distinguish one design from another. To address this challenge, extensive research has been conducted with the primary objective of classifying Batik Nitik; this research leverages an innovative approach combining the microstructure histogram and gray level co-occurrence matrix (GLCM) techniques, collectively referred to as the Microstructure Co-occurrence Histogram (MCH). The MCH method offers a multi-faceted approach to feature extraction, simultaneously capturing color, texture, and shape attributes, thereby generating a set of local features that faithfully represent the intricate details found in Batik Nitik imagery. In parallel, the GLCM method excels at extracting robust texture features by employing statistical measures to portray the subtle nuances within these batik patterns. Nevertheless, the mere fusion of microstructure and GLCM features does not inherently guarantee superior classification performance, so this paper meticulously examines many feature fusion scenarios between microstructure and GLCM to pinpoint the optimal configuration. The dataset consists of 960 Batik Nitik samples across 60 categories. The classifiers employed are K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT), Naïve Bayes (NB), and Linear Discriminant Analysis (LDA). Based on the experimental results, the fusion of microstructure and GLCM features with the LDA classifier yields the best performance compared to the other scenarios and classifiers.
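A GLCM and one derived texture statistic (contrast) can be sketched in plain NumPy as follows; the 4x4 toy image, four gray levels, and single right-neighbor offset are illustrative simplifications of what would be computed on real batik images (which typically use 256 levels and several offsets/angles).

```python
# Minimal sketch of a gray level co-occurrence matrix (GLCM) and its
# contrast statistic. Toy 4x4 image with 4 gray levels; offset = one
# pixel to the right. These choices are illustrative simplifications.
import numpy as np

def glcm(image, levels):
    """Count horizontally adjacent gray-level pairs, then normalise
    the counts into a joint probability matrix p(i, j)."""
    m = np.zeros((levels, levels), dtype=float)
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum of p(i, j) * (i - j)^2 over all pairs."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(round(contrast(p), 4))   # 7/12 of the pair mass differs in level
```

Statistics such as contrast, energy, and homogeneity computed from `p` would then be concatenated with the microstructure histogram before classification.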
Reducing Cognitive Bias of Pre-Service History Teachers through Augmented Reality Elfa Michellia Karima; Nurlizawati Nurlizawati; Firza Firza; Nur Fatah Abidin; Yusuf Ibrahim
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2222

Abstract

Cognitive biases can be problematic and dangerous in history learning. This study aimed to identify the extent to which the independent variables of the case study method and augmented reality influence the dependent variable, and to evaluate the strength and direction of the relationships between these variables in reducing the cognitive biases of pre-service history teachers. The method used is multiple linear regression, which measures how far the application of the case method and the use of augmented reality in learning affect the dependent variable under study. The results showed that augmented reality contributed more to the historical understanding of pre-service history teachers than the case study method: the case study method explained 7.6% of historical understanding, while augmented reality media explained 13.9%. Lecturers can use augmented reality in teaching pre-service history teachers to increase their understanding of history learning material and reduce cognitive biases. This research has implications for using technology and digitalization in history learning, helping pre-service history teachers understand history, its conceptions, and past events broadly and from various perspectives while reducing bias. Understanding history in this way is essential for pre-service history teachers, and technology-based learning is one effective way to avoid cognitive bias in history education.
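The multiple linear regression design described above, with two predictors and one outcome, can be sketched as follows; all scores here are synthetic stand-ins for the study's survey data, with the outcome constructed so that the augmented reality weight dominates, echoing the reported result.

```python
# Minimal sketch of a two-predictor multiple linear regression.
# All data are synthetic illustrations, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
case_method = rng.uniform(1, 5, 100)         # e.g. Likert-scale scores
augmented_reality = rng.uniform(1, 5, 100)

# Synthetic outcome: AR weighted more heavily than the case method,
# mirroring the direction of the reported effects.
understanding = (0.076 * case_method + 0.139 * augmented_reality
                 + rng.normal(0, 0.05, 100))

X = np.column_stack([case_method, augmented_reality])
model = LinearRegression().fit(X, understanding)
print(model.coef_)   # recovered weights, close to the true [0.076, 0.139]
```

In practice one would also inspect R² per predictor (the 7.6% and 13.9% figures above) and significance tests, which a statistics package such as statsmodels reports directly.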
Performance Analysis of Feature Mel Frequency Cepstral Coefficient and Short Time Fourier Transform Input for Lie Detection using Convolutional Neural Network Kusumawati, Dewi; Ilham, Amil Ahmad; Achmad, Andani; Nurtanio, Ingrid
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2062

Abstract

This study aims to determine which model is more effective in detecting lies: a Convolutional Neural Network (CNN) fed with Mel Frequency Cepstral Coefficient (MFCC) features or one fed with Short Time Fourier Transform (STFT) features. The MFCC and STFT processes are based on digital voice data from video recordings labeled as lies or truths regarding certain situations. The data are then pre-processed and used to train the CNN. Model performance evaluation with hyperparameter tuning and random search shows that MFCC-based voice processing provides better performance, with higher accuracy than the STFT process. The best MFCC parameters are filter convolutional1=64, kernel convolutional1=5, filter convolutional2=112, kernel convolutional2=3, filter convolutional3=32, kernel convolutional3=5, dense1=96, optimizer=RMSProp, and learning rate=0.001, achieving an accuracy of 97.13% with an AUC of 0.97. With STFT, the best parameters are filter convolutional1=96, kernel convolutional1=5, filter convolutional2=48, kernel convolutional2=5, filter convolutional3=96, kernel convolutional3=5, dense1=128, optimizer=Adadelta, and learning rate=0.001, achieving an accuracy of 95.39% with an AUC of 0.95. Prosodic features were used as an additional comparison and achieved a low accuracy of 68%. The analysis shows that MFCC extraction with the CNN model produces the best performance for audio-based lie detection. Further research could combine CNN architectures such as ResNet, AlexNet, and others to obtain new models and further improve lie detection accuracy.
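The STFT front end that feeds such a CNN can be sketched with SciPy; the synthetic 440 Hz tone stands in for the interview voice recordings, and `nperseg=512` is an assumed window length. (An MFCC front end would further apply a mel filterbank and a discrete cosine transform to a spectrogram like this one.)

```python
# Minimal sketch of an STFT front end: turn a 1-D audio signal into the
# 2-D magnitude spectrogram a CNN consumes. The tone and window length
# are illustrative assumptions, not the study's settings.
import numpy as np
from scipy.signal import stft

fs = 16_000                                  # sample rate in Hz
t = np.arange(fs) / fs                       # one second of audio
voice = np.sin(2 * np.pi * 440 * t)          # stand-in for a speech signal

# Zxx is the complex time-frequency matrix; its magnitude is the
# spectrogram fed to the CNN as a 2-D input image.
freqs, times, Zxx = stft(voice, fs=fs, nperseg=512)
spectrogram = np.abs(Zxx)
print(spectrogram.shape)                     # (frequency bins, time frames)
```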