Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion, and a concise conclusion. As part of our commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Feature Selection Technique to Improve the Instances Classification Framework Performance for Quran Ontology Yuli Purwati; Fandy Setyo Utomo; Nikmah Trinarsih; Hanif Hidayatulloh
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1195

Abstract

The Al-Quran is the sacred book of Muslims; it conveys God's word in the form of orders, instructions, and guidelines for people to follow in order to lead happy lives both here and in the afterlife. Several earlier studies have used ontologies to store the knowledge found in the Quran. Previous work focused on extracting the relationship between classes and instances, the "is-a" relation, by classifying instances based on the referenced class. Performance testing of the instances classification framework showed that Support Vector Machine (SVM) with Term Frequency-Inverse Document Frequency (TF-IDF) and a stemming operation dropped to an accuracy of 65.41% when the test data size was increased to 30%. The same holds for BPNN with TF-IDF and stemming: on the Indonesian Quran translation dataset with a test data size of 30%, the accuracy drops to 57.86%. Instances classification based on the thematic topics of the Qur'an aims to connect verses (instances) to topics (classes) to give an overall picture of each topic and provide a better understanding to users. This study aims to apply a feature selection technique to the instances classification framework for the Al-Quran ontology and to analyze the impact of feature selection on the framework with a small dataset and limited training data. The instances classification framework in this study consists of several stages: text pre-processing, feature extraction, feature selection, and instances classification. We applied Chi-Square as the feature selection technique, with SVM and BPNN as classifiers. Based on the experiment results, it can be concluded that feature selection with Chi-Square increases precision, f-measure, and accuracy for test data sizes from 40% to 60% on all datasets.
Feature selection with Chi-Square and the SVM classifier provides the highest precision, 64.36%, with a test data size of 60% on the Tafsir Quran dataset from the Ministry of Religious Affairs of Indonesia. Furthermore, feature selection with the BPNN classifier also achieves the highest accuracy, 63.09%, with a test data size of 60% on the Quranic Tafsir dataset from the Ministry of Religion of the Republic of Indonesia.
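The Chi-Square feature selection described in this abstract scores each term by how strongly its presence depends on the class label. A minimal sketch of that statistic for a 2x2 term/class contingency table is below; the counts are hypothetical and purely illustrative, not taken from the paper's datasets:

```python
def chi_square(a, b, c, d):
    """Chi-square score for one term against one class.
    a: docs in class containing the term, b: docs outside the class containing it,
    c: docs in class without the term,   d: docs outside the class without it."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator if denominator else 0.0

# A term concentrated in one thematic class scores high ...
discriminative = chi_square(a=20, b=5, c=10, d=65)
# ... while a term spread evenly across classes scores zero.
uninformative = chi_square(a=10, b=10, c=10, d=10)
```

Terms are ranked by this score and only the top-ranked features are passed to the SVM or BPNN classifier.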
Determining the Rice Seeds Quality Using Convolutional Neural Network Hidayat, Sidiq Syamsul; Rahmawati, Dwi; Prabowo, Muhamad Cahyo Ardi; Triyono, Liliek; Putri, Farika Tono
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1175

Abstract

Seed inspection is crucial for plant nurseries and farmers, as it ensures seed quality when growing seedlings. It is traditionally done by expert inspectors filtering samples manually, but this poses challenges in cost, accuracy, and scale. Speed and accuracy are the main conditions for increasing agricultural productivity. Machine learning is a sub-field of Artificial Intelligence that can be applied to the classification of rice seed quality. The pipeline of a machine learning system comprises dataset collection, training, validation, and testing. Model building begins with collecting data on the characteristics of rice seeds based on physical parameters, namely seed shape and color. The dataset consists of two thousand images divided into two categories: superior seeds and non-superior seeds. Training and validation were conducted using the Convolutional Neural Network (CNN) algorithm with cross-validation on Google Colaboratory notebooks. The dataset was split into training and validation data at a ratio of 80:20. The resulting model is a Deep Convolutional Neural Network (Deep CNN) that can classify digital images of rice seeds uploaded into the system. Experiments on 30 test samples show that the system can classify superior and non-superior seeds with a precision of 93% and a recall of 95%.
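The precision and recall figures reported above come from counting true and false positives on the held-out test set. A minimal sketch of that computation, with entirely hypothetical labels (the real test batch and its predictions are not published in the abstract):

```python
def precision_recall(y_true, y_pred, positive="superior"):
    # Count true positives, false positives, and false negatives for the positive class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative ground truth and model predictions for a small test batch
truth = ["superior"] * 4 + ["non-superior"] * 4
preds = ["superior", "superior", "superior", "non-superior",
         "non-superior", "non-superior", "non-superior", "superior"]
p, r = precision_recall(truth, preds)
```

The same two counts-based formulas apply whatever classifier produced the predictions.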
Mining Opinions on a Prominent Health Insurance Provider from Social Media Microblog: Affective Model and Contextual Analysis Approach Rasyada, Ihda; Barakbah, Ali Ridho; Amalo, Elizabeth Anggraeni
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1771

Abstract

Social media plays a significant role in enhancing communication among organizations, communities, and individuals. Besides being a mode of communication, the data generated from these interactions can also be leveraged to assess the performance of an institution or organization. People may evaluate public companies based on the opinions of their users. However, user-supplied information is brief and written in natural language. In addition to being brief, messages and other social media interactions carry a great deal of contextual information. This multiplicity of context can be used to conduct a more in-depth analysis of user opinion. This study presents a new approach to opinion mining for social media microblogging data by applying an affective model and contextual analysis. The affective model is applied for sentiment analysis to measure the degree of each adjective in a user opinion by evaluating adjectives according to their varying levels of pleasure and arousal. The contextual analysis in this paper is modeled on topic, user, adjective, and personal characteristics, with four main features: (1) temporal keyword sentiment context, (2) temporal user sentiment context, (3) user impression context, and (4) temporal user character context. Our affective model outperformed SVM, achieving 75.6% accuracy and a 74.98% F1-score. In the experiment, the contextual analysis produced graph visualizations of the output for each query feature for future development; features one to four successfully process their queries to produce visualization graphs.
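The affective model described above scores adjectives along pleasure and arousal dimensions and aggregates them into a sentiment label. A toy sketch of that idea follows; the lexicon entries and their pleasure/arousal values are invented for illustration, not taken from the paper's affective lexicon:

```python
# Toy affective lexicon: adjective -> (pleasure, arousal), each in [-1, 1].
# Values are illustrative assumptions, not the paper's actual ratings.
AFFECT = {
    "good": (0.7, 0.3),
    "helpful": (0.8, 0.4),
    "slow": (-0.4, 0.2),
    "terrible": (-0.9, 0.7),
}

def sentiment(tokens):
    """Label an opinion by the mean pleasure of its known adjectives."""
    scores = [AFFECT[t] for t in tokens if t in AFFECT]
    if not scores:
        return "neutral"
    pleasure = sum(p for p, _ in scores) / len(scores)
    return "positive" if pleasure > 0 else "negative" if pleasure < 0 else "neutral"

label = sentiment(["the", "service", "was", "good", "and", "helpful"])
```

The arousal dimension, unused in this tiny sketch, lets the full model distinguish calm approval from enthusiastic praise at the same pleasure level.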
Visualization Mapping of the Socio-Technical Architecture based on Tongkonan Traditional House Taufiq Natsir; Bakhrani Rauf; Faisal Syafar; Ahmad Wahidiyat Haedar; Faisal Najamuddin
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1788

Abstract

The socio-technical architecture of constructing a community's traditional house is a sine qua non at the locus of developing tourism destinations in several areas worldwide. A socio-technical system is a long-standing approach that is realigned with developing integrated tourism components, especially various tourist attractions based on local cultural treasures. This qualitative research, with a phenomenological approach, analyzes and explains the noumena (meaning) behind the phenomena (facts) of socio-technical architecture based on Tongkonan traditional houses in Tana Toraja, Indonesia. The study found that the architectural works are full of symbolic meaning in the construction of Tongkonan traditional houses. The crystallization of basic values and value orientation, as the noumena behind the socio-technical architectural phenomenon, is that the Tongkonan traditional house stands upright because five pillars support it, a representation of the 5A components of tourism development (Attractions, Accessibility, Accommodation, Amenity, Ancillary). The Tongkonan roof model, which at first glance looks like a person praying with hands raised to God, the Creator of the universe, attests to the basic socio-cultural and spiritual values and orientation of the Toraja people. The images of a rooster, the sun, and the arrangement of horns mounted on the Tongkonan wall prove the rich treasures of local socio-cultural life (local wisdom, local genius) of the community, the result of creativity and innovation that has sustainable value.
A Novel Approach of Animal Skin Classification Using CNN Model with CLAHE and SUCK Method Support Abdul Haris Rangkuti; Varyl Athala Hasbi
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1153

Abstract

This study describes the process of classifying animal skin images, for which optimal image characteristics are rather difficult to obtain. For this reason, in the pre-processing stage we propose two methods to support feature extraction: sharpening using a convolutional kernel (SUCK) and contrast-limited adaptive histogram equalization (CLAHE). SUCK operates on pixel values with direct arithmetic to build a new image; the resulting value becomes the new value of the current pixel. CLAHE overcomes the limitations of the global approach by performing local contrast enhancement. The advantages of the two methods make them a solution for obtaining features processed at the feature extraction and classification stages. Animal skin imagery has characteristics of shape and texture, including skin color. In this study, experiments were carried out on several CNN models, with an average classification accuracy of more than 70% using the sharpening and equalization methods on six animal skins. In more detail, the average classification accuracies of three CNN models supported by the two methods, sharpening and equalization respectively, were 67.73% and 73.78% for ResNet50V2, 82.13% and 74.76% for InceptionV3, and 87.64% and 87.46% for DenseNet121. This research can be continued to improve accuracy on other animal skin images, including distinguishing fake from genuine skin images.
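Sharpening with a convolutional kernel, as the SUCK step does, replaces each pixel with a weighted sum of its neighborhood. A minimal sketch using the common 3x3 sharpening kernel on a grayscale image stored as a list of rows (the kernel choice and the tiny image are assumptions for illustration, not the paper's exact configuration):

```python
# Classic 3x3 sharpening kernel: amplify the center, subtract the 4-neighbors
SHARPEN = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]

def convolve(img, kernel):
    """Apply a 3x3 kernel to the interior pixels, clipping results to [0, 255]."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, s))
    return out

# A flat region is unchanged; a pixel brighter than its neighbors is boosted
flat = [[100] * 5 for _ in range(5)]
spot = [row[:] for row in flat]
spot[2][2] = 150
```

On a uniform region the kernel weights sum to 1, so the image passes through unchanged; only local contrast (edges, spots) is amplified.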
Ranjana Script Handwritten Character Recognition using CNN Jen Bati; Pankaj Raj Dawadi
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1725

Abstract

This paper proposes a public image database, the Ranjana script Handwritten Character Dataset (RHCD), freely available to Ranjana script researchers and anyone interested in the subject. To the best of our knowledge, RHCD is the first publicly available database for Ranjana script research. The Ranjana script, descended from the Brahmi script, consists of 36 consonant letters, 16 vowel letters, and 10 numerals. The focus of this research is three-fold: first, to create a new database for Ranjana script handwritten character recognition; second, to test character recognition accuracy on RHCD using existing CNN architectures such as LeNet-5, AlexNet, and ZFNet; third, to propose a model, tuned by investigating different hyperparameters, that improves recognition accuracy on RHCD. The research method applied in this study comprises dataset collection, digitization and cropping, pre-processing, dataset splitting, data augmentation, and finally implementation of the CNN models (existing and proposed). Performance evaluation is based on test accuracy, precision, recall, and F1-score. The experimental results show that our model ranks first, with a testing accuracy of 99.73% at 64x64-pixel resolution and precision, recall, and F1-score values of 1. Creation and recognition of Ranjana vowel modifiers and compound characters can be the next milestone; segmentation of words and sentences into characters, recognizing each character individually, can be the next research domain.
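The data augmentation stage in the pipeline above multiplies the training set by applying simple geometric transforms to each image. A toy sketch on a grayscale image stored as a list of rows (the specific transforms are assumptions for illustration; character datasets in practice favor small rotations and shifts, since flips can turn one glyph into another):

```python
def rotate90(img):
    # Rotate a square grayscale image (list of rows) 90 degrees clockwise
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Yield the original image plus simple transformed variants."""
    yield img                          # original
    yield [row[::-1] for row in img]   # horizontal flip
    yield rotate90(img)                # 90-degree rotation

sample = [[0, 1],
          [2, 3]]
variants = list(augment(sample))
```

Each variant keeps the original label, so a 2,000-image dataset grows by a factor equal to the number of transforms applied.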
Identification of Coffee Types Using an Electronic Nose with the Backpropagation Artificial Neural Network Roza Susanti; Zaini Zaini; Anton Hidayat; Nadia Alfitri; Muhammad Ilhamdi Rusydi
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1375

Abstract

Coffee is one of the most famous plant commodities in the world; among its powders are Arabica and Robusta. This study aimed to identify two coffee powders, Arabica and Robusta, based on their blended aroma profiles, employing a backpropagation Artificial Neural Network (ANN). Four gas sensors were employed, namely TGS 2602, 2610, 2611, and 2620, to capture the diverse coffee aromas. These detectors were combined with aroma-sensor transducers integrated with signal amplifiers or processors, featuring a 10 kΩ load resistance. Three aroma types were investigated: Arabica coffee, Robusta coffee, and no coffee beans. The neural network architecture consisted of four inputs, one from each sensor, and one hidden layer housing eight neurons. Two output neurons were used for classification, with 70 samples per type used for training the ANN. During the training phase, the developed neural network showed an impressive accuracy of 91.90%. The TGS 2602 and 2611 sensors showed the most significant differences among the three aroma types. When analyzing ground Robusta coffee, the TGS 2602 and 2611 sensors recorded 2.967 V and 1.263 V, with gas concentrations of 17.92 ppm and 2441.8 ppm. Similarly, for ground Arabica coffee, the same sensors displayed 3.384 V and 1.582 V, with gas concentrations of 20.445 ppm and 3058.5 ppm, respectively. The implemented ANN with aroma sensors as input successfully identified the coffee powders.
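The 4-8-2 architecture described above (four sensor inputs, eight hidden neurons, two output neurons) can be sketched as a forward pass through two fully connected sigmoid layers. The weights below are random placeholders standing in for the trained parameters; the sample input reuses two of the reported sensor voltages plus two invented values:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    # Fully connected layer with sigmoid activation
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# 4 sensor inputs -> 8 hidden neurons -> 2 outputs (e.g., Arabica vs Robusta).
# Random weights are placeholders for backpropagation-trained values.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
b1 = [0.0] * 8
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(2)]
b2 = [0.0] * 2

def classify(voltages):
    hidden = layer(voltages, w1, b1)
    return layer(hidden, w2, b2)

# TGS 2602 and 2611 readings for Robusta, plus two assumed readings
out = classify([2.967, 1.263, 3.1, 1.5])
```

During training, backpropagation would adjust `w1`, `b1`, `w2`, and `b2` to push the correct output neuron toward 1 for each labeled aroma sample.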
K-Means Clustering Algorithm for Partitioning the Openness Levels of Open Government Data Portals Emigawaty Emigawaty; Kusworo Adi; Adian Fatchur Rochim; Budi Warsito; Adi Wibowo
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1761

Abstract

More and more local governments in Indonesia are making their data available to the public. This benefits data scientists, researchers, business owners, and other potential users seeking datasets for empirical research and business innovation. However, the mere accessibility of Open Government Data (OGD) portals does not mean that they adhere to the established rules and principles of data openness. To evaluate the openness of 24 OGD portals in Indonesia, this study used the K-means clustering algorithm to partition them into three levels: Leaders, Followers, and Beginners. A group of 30 participants, including researchers, data scientists, business enablers, and graduate students, rated the portals on 32 sub-questions related to the eight main principles of data disclosure, focusing on health, population, and education datasets. The study found eight portals categorized as Leaders, ten as Followers, and seven as Beginners. It demonstrated that the K-means clustering algorithm can effectively assess the degree of openness of OGD portals in Indonesia based on the eight main principles of data openness, and it recommends adding OGD portals in the eastern territories to supplement the existing case studies in the western and central regions.
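Partitioning portals into Leaders, Followers, and Beginners with K-means amounts to clustering their aggregated openness scores with k = 3. A minimal one-dimensional sketch is below; the scores are invented stand-ins for each portal's mean participant rating, not the study's data:

```python
def kmeans_1d(scores, k=3, iters=20):
    """Lloyd's algorithm on scalar scores: assign to nearest centroid, recompute."""
    lo, hi = min(scores), max(scores)
    cents = [lo + (hi - lo) * i / (k - 1) for i in range(k)]  # spread initial centroids
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            groups[min(range(k), key=lambda i: abs(s - cents[i]))].append(s)
        cents = [sum(g) / len(g) if g else cents[i] for i, g in enumerate(groups)]
    return cents, groups

# Hypothetical mean openness ratings for ten portals (0-100 scale)
scores = [88, 85, 90, 62, 60, 58, 65, 30, 28, 35]
cents, groups = kmeans_1d(scores)
# groups[0] ~ Beginners, groups[1] ~ Followers, groups[2] ~ Leaders
```

With k = 3 the centroids settle into low, middle, and high score bands, which map naturally onto the Beginner/Follower/Leader labels.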
Economic Impact due Covid-19 Pandemic: Sentiment Analysis on Twitter Using Naïve Bayes Classifier and Support Vector Machine Aini, Qurrotul; Fauzi, Raffie Rizky; Khudzaeva, Eva
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1474

Abstract

Covid-19 is a disease outbreak caused by a severe acute respiratory syndrome coronavirus. Covid-19 first appeared in Indonesia on March 2, 2020, with two confirmed cases, and grew to 1285 cases across 30 provinces. One impact of the Covid-19 pandemic is on the economy, where incomes have declined drastically. This study aims to classify public opinion to determine the level of public sentiment on the economic impact of the Covid-19 pandemic and to identify parameters that influence the accuracy of the sentiment analysis classification model. The methods used in this research are Lexicon-based scoring, the Support Vector Machine (SVM), and the Naive Bayes Classifier (NBC). First, the Lexicon is used for scoring and labeling the pre-processed data. Second, SVM is used to classify the sentiment, searching for the best accuracy across linear, radial, polynomial, and sigmoid kernels. Third, NBC is used to classify sentiment as a comparison method. The results indicate that the 255 tweets consist of 44 positive tweets (17.25%), 46 neutral tweets (18.04%), and 165 negative tweets (64.71%). It can therefore be inferred that the economic impact of the Covid-19 pandemic on the Indonesian people carries a highly negative sentiment. In terms of performance, SVM yielded the better accuracy of 100%, with precision, recall, and F-measure all equal to 1. This study shows that selecting the kernel type and handling underfitting can improve the accuracy of SVM, and that SVM can perform well on a small amount of training data.
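The lexicon step described above assigns each tweet a label by counting sentiment-bearing words. A toy sketch of that scoring is below; the word lists are invented illustrations, not the study's actual Indonesian lexicon:

```python
# Toy sentiment word lists (illustrative, not the study's lexicon)
POSITIVE = {"bantuan", "pulih", "membaik"}
NEGATIVE = {"rugi", "phk", "turun", "krisis"}

def label(tweet):
    """Score a tweet as positive/negative word count difference, then label it."""
    tokens = tweet.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Three illustrative tweets about the pandemic economy
tweets = [
    "ekonomi mulai pulih dan membaik",   # economy starting to recover and improve
    "banyak phk dan omzet turun",        # many layoffs and falling revenue
    "harga masih sama",                  # prices still the same
]
counts = {s: sum(label(t) == s for t in tweets)
          for s in ("positive", "neutral", "negative")}
```

The labels produced this way become the training targets for the SVM and NBC classifiers that follow.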
Comparison of K-Means & K-Means++ Clustering Models using Singular Value Decomposition (SVD) in Menu Engineering Setiyawati, Nina; Bangkalang, Dwi Hosanna; Purnomo, Hindriyanto Dwi
JOIV : International Journal on Informatics Visualization Vol 7, No 3 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3.1053

Abstract

The menu is one of the most fundamental aspects of business continuity in the culinary industry, and one of the tools that can be used for menu analysis is menu engineering. Menu engineering is an analytical tool that assists restaurants, companies, and small and medium-sized enterprises (SMEs) in assessing and making decisions on marketing strategies, menu design, and sales in order to maximize profit. In this study, several menu engineering models were proposed, and their performance was analyzed using a dataset from the Point of Sales (POS) application of an SME in the culinary field. The research consists of three stages: pre-processing the data, comparing the models, and evaluating the models using the Davies-Bouldin Index (DBI). At the model comparison stage, four models are compared: K-Means, K-Means++, K-Means with Singular Value Decomposition (SVD), and K-Means++ with SVD. SVD is used in the dataset transformation process, and the K-Means and K-Means++ algorithms are used for grouping menu items. The experiments show that the K-Means++ model with SVD produced the most optimal clusters in this research: an average within-cluster distance of 0.002 and the smallest DBI value, 0.141. Therefore, using the K-Means++ model with SVD in menu engineering analysis produces clusters containing menu items with high similarity and significant distance between groups. The results obtained from the proposed model can be used as a basis for strategic decisions on pricing, marketing strategy, and more for SMEs, especially in the culinary business.
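The Davies-Bouldin Index used to compare the models above rewards compact, well-separated clusters: for each pair it takes the ratio of within-cluster spread to centroid distance, and averages each cluster's worst ratio. A minimal one-dimensional sketch with invented clusters (the study's menu-item vectors are multi-dimensional):

```python
def dbi(clusters):
    """Davies-Bouldin index for 1-D clusters; lower means better separation."""
    cents = [sum(c) / len(c) for c in clusters]                       # centroids
    spreads = [sum(abs(x - m) for x in c) / len(c)                    # mean distance
               for c, m in zip(clusters, cents)]                      # to centroid
    k = len(clusters)
    worst = [max((spreads[i] + spreads[j]) / abs(cents[i] - cents[j])
                 for j in range(k) if j != i)
             for i in range(k)]
    return sum(worst) / k

tight = dbi([[1, 2], [10, 11]])   # compact, far apart -> low index
loose = dbi([[1, 6], [5, 10]])    # spread out, overlapping -> high index
```

A lower DBI, like the 0.141 reported for K-Means++ with SVD, indicates menu-item clusters that are internally similar and mutually distinct.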
