Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of the results of high-quality research in all aspects of Computer Science, Computer Engineering, Information Technology and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion and a concise conclusion. As part of our commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Preserving Indigenous Indonesian Batik Motif Using Machine Learning and Information Fusion Sumari, Arwin Datumaya Wahyudi; Aziza, Nadia Layra; Hani'ah, Mamluatul
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.3714

Abstract

Preserving Indonesia’s indigenous cultural heritage in the form of Batik, with its various motifs, helps maintain the nation’s continuity from generation to generation. Hundreds of Batik motifs are spread across multiple regions of Indonesia, each with a unique name and a cultural and historical meaning behind it. The distinctive patterns of Batik motifs challenge the community to remember and distinguish them, so an intelligent system is crucial. This study designed and implemented a Batik motif classification system based on machine learning’s Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. The primary key to classifier performance is the features. An assessment was carried out on the performance of two feature models: single features and fused features. The Gray Level Co-occurrence Matrix (GLCM) produces the texture features of the Batik motifs, and Moment Invariants (MI) produce their shape features. The Union Fusion and XOR operators combine the two feature sets into a single fused feature. The proposed combination of techniques, namely SVM and GLCM, outperforms the scenarios that combine Multi Texton Histogram (MTH), Multi Texton Co-occurrence Descriptor (MTCD), or Multi Texton Co-occurrence Histogram (MTCH) with SVM, the combination of GLCM with 1-NN, and the combination techniques that employed information fusion. The experiment results showed that the proposed combination technique achieved an accuracy of 97%. It can be concluded that SVM (RBF) with GLCM yields the best Batik motif recognition system.
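The texture branch of such a pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the GLCM here uses only a single horizontal pixel offset and two of the standard texture statistics (contrast and homogeneity), and two synthetic texture classes stand in for Batik motif patches.

```python
import numpy as np
from sklearn.svm import SVC

def glcm_features(img, levels=8):
    """Quantize a [0,1] grayscale patch, build a horizontal-offset
    co-occurrence matrix, and derive two classic GLCM texture features."""
    q = (img * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    di, dj = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    contrast = (glcm * (di - dj) ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(di - dj))).sum()
    return [contrast, homogeneity]

# Two synthetic texture classes stand in for Batik motif patches.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(40):
    smooth = np.tile(np.linspace(0, 1, 16), (16, 1)) + rng.normal(0, 0.02, (16, 16))
    checker = np.indices((16, 16)).sum(0) % 2 + rng.normal(0, 0.02, (16, 16))
    X.append(glcm_features(smooth.clip(0, 1))); y.append(0)
    X.append(glcm_features(checker.clip(0, 1))); y.append(1)

# RBF-kernel SVM on the GLCM features, as in the paper's best scenario.
clf = SVC(kernel="rbf").fit(X[:60], y[:60])
acc = clf.score(X[60:], y[60:])
```

The fused-feature variants (Union Fusion, XOR with Moment Invariants) would extend the feature vector before the same SVM step.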
Brain Tumor Classification based on Convolutional Neural Networks with an Ensemble Learning Approach through Soft Voting Puspita, Kartika; Ernawan, Ferda; Alkhalifi, Yuris; Kasim, Shahreen; Erianda, Aldo
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.4609

Abstract

The brain is a vital organ that serves various purposes in the human body: it processes sensory data, generates muscle movements, and performs complex cognitive tasks. One of the most common conditions affecting the brain is the growth of abnormal tissue in brain cells, leading to the development of brain tumors. The most common forms of brain tumors are pituitary, glioma, and meningioma, which are major global health issues. These issues call for appropriate and prompt handling before the disease becomes more severe. Prompt handling begins with early disease detection, and computer vision is one of the trending early detection methods that can predict diseases from images. This research proposes a computer vision model, a Convolutional Neural Network (CNN), with a soft voting ensemble learning strategy to classify brain tumors. The dataset consists of 7,023 MRI brain images at a resolution of 512x512 pixels, covering scans without tumors and tumors such as glioma, meningioma, and pituitary. This experiment investigates classifier models such as VGG16, MobileNet, ResNet50, and DenseNet121, each of which has been optimized to maximize performance. The proposed soft voting ensemble strategy outperformed existing methods, with an accuracy of 97.67% and a Cohen's Kappa value of 0.9688, proving effective in improving classification accuracy.
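Soft voting itself is a small step: average the per-class probability vectors from the base CNNs and take the argmax. A minimal sketch, with stand-in probability arrays in place of real VGG16/MobileNet/DenseNet121 softmax outputs:

```python
import numpy as np

def soft_vote(prob_list):
    """Average the per-class probability vectors from several base
    classifiers and pick the class with the highest mean probability."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Stand-in softmax outputs for a batch of 3 images over 4 classes
# (glioma, meningioma, pituitary, no tumor) from three hypothetical CNNs.
p_a = np.array([[0.70, 0.10, 0.10, 0.10],
                [0.20, 0.50, 0.20, 0.10],
                [0.10, 0.20, 0.30, 0.40]])
p_b = np.array([[0.60, 0.20, 0.10, 0.10],
                [0.10, 0.30, 0.40, 0.20],
                [0.05, 0.05, 0.10, 0.80]])
p_c = np.array([[0.80, 0.05, 0.10, 0.05],
                [0.25, 0.45, 0.20, 0.10],
                [0.10, 0.10, 0.20, 0.60]])

pred = soft_vote([p_a, p_b, p_c])  # → class indices [0, 1, 3]
```

Unlike hard voting, a confident model can outvote two uncertain ones (see the second image, where averaging favors class 1 even though one model prefers class 2).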
A Hybrid Approach for Malicious URL Detection Using Ensemble Models and Adaptive Synthetic Sampling Sujon, Khaled Mahmud; Hassan, Rohayanti; Zainodin, Muhammad Edzuan; Salamat, Mohamad Aizi; Kasim, Shahreen; Alanda, Alde
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.4627

Abstract

Malicious URLs pose a significant cybersecurity threat, often leading to phishing attacks, malware infections, and data breaches. Early detection of these URLs is crucial for preventing security vulnerabilities and mitigating potential losses. In this paper, we propose a novel approach to malicious URL detection that combines ensemble learning methods with ADASYN-based oversampling to address the class imbalance typically found in malicious URL datasets. We evaluated three popular machine learning classifiers, XGBoost, Random Forest, and Decision Tree, and incorporated ADASYN (Adaptive Synthetic Sampling) to handle the class-imbalanced nature of our selected dataset. Our detailed experiments demonstrate that applying ADASYN can significantly increase the performance of the predictive model across all metrics: XGBoost saw a 2.2% improvement in accuracy, Random Forest achieved a 1.0% improvement in recall, and Decision Tree displayed a 3.0% improvement in F1-score. The Decision Tree model showed the most substantial improvements, particularly in recall and F1-score, indicating better detection of malicious URLs. Finally, our findings highlight the potential of ensemble learning, enhanced by ADASYN, for improving malicious URL detection and demonstrate its applicability in real-world cybersecurity applications.
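The core idea of ADASYN is that minority samples surrounded by more majority neighbours receive proportionally more synthetic samples. The sketch below is a simplified, self-contained version of that scheme (the paper presumably uses a library implementation such as imbalanced-learn's `ADASYN`), shown on a toy 2-D imbalanced dataset:

```python
import numpy as np

def adasyn_like(X, y, minority=1, k=5, rng=None):
    """Simplified ADASYN-style oversampling: minority points with more
    majority neighbours get more synthetic samples, each generated by
    interpolating toward a random minority neighbour."""
    rng = rng or np.random.default_rng(0)
    Xm = X[y == minority]
    G = (y != minority).sum() - len(Xm)            # samples needed to balance
    d = np.linalg.norm(Xm[:, None] - X[None], axis=2)
    nn = d.argsort(axis=1)[:, 1:k + 1]             # k nearest neighbours in X
    r = (y[nn] != minority).mean(axis=1)           # local majority density
    r = r / r.sum() if r.sum() else np.full(len(Xm), 1 / len(Xm))
    counts = np.round(r * G).astype(int)           # per-point synthetic quota
    dm = np.linalg.norm(Xm[:, None] - Xm[None], axis=2)
    nnm = dm.argsort(axis=1)[:, 1:k + 1]           # minority-only neighbours
    synth = [Xm[i] + rng.random() * (Xm[rng.choice(nnm[i])] - Xm[i])
             for i in range(len(Xm)) for _ in range(counts[i])]
    Xs = np.vstack([X, np.array(synth)]) if synth else X
    ys = np.concatenate([y, np.full(len(synth), minority)])
    return Xs, ys

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)                  # 90 benign vs 10 malicious
Xs, ys = adasyn_like(X, y, minority=1)
```

The rebalanced `(Xs, ys)` would then be fed to each classifier (XGBoost, Random Forest, Decision Tree) in place of the raw imbalanced data.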
Yolo-Drone: Detection Paddy Crop Infected Using Object Detection Algorithm Yolo and Drone Image Masykur, Fauzan; Prasetyo, Angga; Zulkarnain, Ismail Abdurrozaq; Kumalasari, Ellisia; Utomo, Pradityo
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.3472

Abstract

Crop failure is an undesirable outcome of rice farming for every farmer because it disrupts the family's economic stability. One factor behind crop failure in rice cultivation is disease attack, which causes infection. Infection in the leaves covers the green leaf substance, or chlorophyll, so the leaves cannot absorb nutrients from sunlight and the rice cannot grow optimally. Detecting such infections before they lead to crop failure is therefore a particular concern. This paper discusses the detection of infected rice plants, particularly leaf infections, using drone camera images. Unmanned aircraft, also known as drones, fly above rice fields to capture images of rice plants, which are then used as datasets for training models to detect infected and healthy rice plants. The detection of disease in rice leaves is carried out using the You Only Look Once version 8 (YOLOv8) object detection algorithm, with a model trained using Google Colab Pro+. Training the model to detect healthy and infected plant leaves is the primary objective of this study. The YOLOv8 object detection model, when applied to detect rice plants in two classes (healthy and infected), shows quite good results, indicated by recall, precision, and F1-score values (0.99, 0.814, 0.90) approaching 1 in all classes.
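As a quick consistency check, the reported F1-score follows from the reported precision and recall, since F1 is their harmonic mean:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported detection metrics for the two-class (healthy/infected) model:
# recall 0.99 and precision 0.814 give an F1 of about 0.89, which the
# abstract rounds to 0.90.
score = f1(precision=0.814, recall=0.99)
```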
Efficient Broker-Driven Request Packet Size Sekhi, Ihab; Nehéz, Károly
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.3131

Abstract

Efficient virtual machine (VM) allocation is fundamental in cloud computing to optimize resource utilization and ensure high performance. Traditional methods often fail to account for the variability in request packet sizes, resulting in inefficiencies and performance bottlenecks. This study introduces a novel broker-driven VM allocation approach integrated with fuzzy logic to dynamically optimize resource distribution and address these limitations. The proposed methodology employs a broker system for real-time monitoring and analysis of request packet sizes, leveraging fuzzy logic to dynamically adjust VM allocations based on fluctuating workload demands. Validation of the approach was conducted using real-world data from the Google Cloud Platform's Europe West3 region and t2d-standard machine types. Simulations executed with the Cloud Analyst tool across five scenarios demonstrated the method's efficacy compared to traditional techniques. The results from the third scenario, used as a representative example, include a 67.62% reduction in response time, a 26.64% decrease in data center processing time, a 26.65% improvement in request serving time, and a 70.65% reduction in total data transfer costs; the other scenarios showed comparable levels of improvement. The study demonstrates the effectiveness of a broker-driven, fuzzy-logic-enhanced system in modern cloud computing, highlighting its adaptability and scalability. Future research should incorporate energy consumption and fault tolerance parameters, apply the method to hybrid and multi-cloud environments, and integrate machine learning techniques.
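A fuzzy-logic mapping from packet size to VM capacity can be illustrated with triangular membership functions and weighted-average defuzzification. The membership ranges, rule outputs, and the `vm_share` function below are all illustrative assumptions, not the paper's actual rule base:

```python
def small(x):  return max(0.0, min(1.0, (40 - x) / 40))       # left shoulder
def medium(x): return max(0.0, min((x - 20) / 30, (80 - x) / 30))  # triangle
def large(x):  return max(0.0, min(1.0, (x - 60) / 40))       # right shoulder

def vm_share(packet_kb):
    """Toy fuzzy rule base mapping request packet size (KB) to a VM
    capacity share via weighted-average defuzzification: small packets
    map toward 0.2, medium toward 0.5, large toward 0.9."""
    weights = {0.2: small(packet_kb),
               0.5: medium(packet_kb),
               0.9: large(packet_kb)}
    den = sum(weights.values())
    return sum(level * w for level, w in weights.items()) / den if den else 0.0
```

A 50 KB request falls entirely in the "medium" set and gets share 0.5, while a 70 KB request partially activates both "medium" and "large" and lands between them, which is the smooth interpolation behavior that makes fuzzy allocation adaptive.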
Ad-hoc Networks and Cloud Databases for Renewable Energy Systems Saad Ahmed, Omar; Shuker Mahmoud, Mahmoud; Waleed Khalid, Rafal; Mohammed Khaleel, Basma; Waleed, Ghufran
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.4628

Abstract

The growing heterogeneity and decentralization of renewable energy infrastructures have created a need for adaptive, scalable, and intelligent communication and data management solutions. Centralized systems have limitations in terms of latency, scalability, and fault resilience, whereas purely decentralized systems can encounter challenges with data integration and long-term analytics. In this article, a hybrid architecture using mobile ad hoc networks and cloud databases to improve the collaborative operation of distributed renewable energy systems is introduced. The architecture leverages latency-aware routing protocols for hard real-time communication among edge devices, solar panels, wind turbines, and battery storage. At the same time, it uses cloud-based predictive analytics to enable more powerful capabilities, such as failure diagnostics and power scheduling. In extensive simulations, we demonstrated improvements of several orders of magnitude across key operational metrics, including latency reduction, throughput gains, energy efficiency, and scaling. Furthermore, we introduced machine learning applications, a BiLSTM-CNN hybrid for fault prediction and a reinforcement learning agent for energy dispatch, improving system flexibility and the ability to make informed decisions. The results demonstrate the potential of hybrid communication and analytics systems to enable next-generation smart grid applications by improving reliability, responsiveness, and resource allocation. This study adds to the existing knowledge base on intelligent energy by providing a design that can be easily replicated and scaled, while accounting for operational and long-term sustainability performance.
Drones and IoT for Enhancing Renewable Energy Integration Hameed, Maan; Abdulkareem Hameed, Nada; Natiq Abdulwahab, Imad; Hashim Qasim, Nameer; S. Alani, Saad
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.5.4630

Abstract

Ranging from real-time monitoring to predictive maintenance and operational optimization, the increasing complexity of renewable energy systems requires sophisticated solutions. This article proposes a holistic solution that combines drones and IoT to improve installation efficiency and reduce incidents in wind, solar, and hydropower energy production. The study uses a hybrid approach that combines sensor analytics, drone-assisted infrastructure inspection, edge computing for latency minimization, and multivariate modeling to quantify the system's enhancement. Field trials involved three renewable power plants over the course of six months and included the acquisition of more than 10,000 data records related to power plant operations. Integrating thermal, RGB, and LiDAR sensors on a drone resulted in a significant increase in inspection efficiency, fault coverage, and spatial resolution. At the same time, deployed IoT sensors continuously monitored inverter temperature, vibration frequency, and energy output. Statistical regression models revealed highly significant relationships among UAV inspection frequency, IoT latency, and energy efficiency, and algorithmic modules such as support vector machines, Kalman filters, and ant colony optimization further improved fault diagnosis, data fusion, and pathfinding. The results validate the applicability of drones and IoT for enhancing system uptime, dependability, and predictability without introducing extra operational load. This work lays out a scalable, modular approach, feasible for deployment in smart grid scenarios, which enables sustainable, intelligent energy management.
Optimizing Machine Learning Models for Anomaly-based IDS using Intercorrelation Threshold Wahyu Adi, Prajanto; Sugiharto, Aris; Malik Hakim, Muhammad; Rizki Saputra, Naufal; Hanif Setiawan, Syariful
JOIV : International Journal on Informatics Visualization Vol 9, No 6 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.6.3355

Abstract

This study aims to improve the performance of attack detection on the Bot-IoT dataset, which faces class imbalance. The method involves developing a feature selection model based on the Pearson correlation coefficient between features, with an adaptive threshold applied. Two datasets are used: D1, with the 10 best features, and D2, with all features. An oversampling technique is applied to the minority class, followed by calculating feature correlations to determine the best features using a threshold based on the average of the highest and lowest correlations. The feature selection process is carried out iteratively, with performance testing across several machine learning algorithms, including KNN, Random Forest, Logistic Regression, and SVM. The results show that the proposed feature selection method can improve the performance of the minority class without sacrificing the majority class's performance. On the D1 dataset, the Random Forest algorithm achieved 96% accuracy, while KNN achieved 93%. On the D2 dataset, KNN achieved balanced performance, with average precision, recall, and F1-score of 0.99 for both classes, while Random Forest achieved lower results on the minority class. These findings indicate that correlation-based feature selection can improve attack detection performance on datasets with high class imbalance, and it can be applied in future studies to similar problems in IoT-based intrusion detection systems.
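The threshold rule, averaging the highest and lowest pairwise correlations, can be sketched in a few lines. This is a simplified reading of the method (one fixed pass rather than the paper's iterative procedure, and dropping the later feature of each over-threshold pair), demonstrated on synthetic data where feature `b` nearly duplicates feature `a`:

```python
import numpy as np

def select_by_correlation(X, names):
    """Adaptive-threshold selection: the threshold is the mean of the
    highest and lowest absolute pairwise correlations; for each pair
    above it, the later feature is dropped."""
    cm = np.abs(np.corrcoef(X, rowvar=False))
    off = cm[~np.eye(len(names), dtype=bool)]      # off-diagonal entries
    threshold = (off.max() + off.min()) / 2
    drop = set()
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if cm[i, j] > threshold and i not in drop:
                drop.add(j)
    return [n for k, n in enumerate(names) if k not in drop], threshold

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + rng.normal(scale=0.05, size=200)   # near-duplicate of a
c = rng.normal(size=200)                   # independent feature
kept, thr = select_by_correlation(np.column_stack([a, b, c]), ["a", "b", "c"])
```

Here `b` correlates with `a` at roughly 0.99 while the other pairs sit near zero, so the adaptive threshold lands around 0.5 and only the redundant feature is removed.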
Hybrid Logistic Regression Random Forest on Predicting Student Performance Rohman, Muhammad Ghofar; Abdullah, Zubaile; Kasim, Shahreen; Rasyidah, -
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.9.2.3972

Abstract

The research aims to investigate the effects of imbalanced data on machine learning, to overcome imbalanced data using SMOTE oversampling, and to improve machine learning performance using hyperparameter tuning. This study proposed a hybrid model that combines logistic regression and random forest with random search CV, SMOTE oversampling, and hyperparameter tuning. The results showed that the proposed hybrid logistic regression, random forest, and random search CV model produces more effective performance than logistic regression or random forest alone, with accuracy, precision, recall, and F1-score of 0.9574, 0.9665, 0.9576. This can contribute a practical model to address imbalanced data classification based on data-level solutions for student performance prediction.
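One plausible reading of the hybrid is a soft-voting combination of the two classifiers trained on SMOTE-rebalanced data. The sketch below makes that assumption explicit, with a minimal SMOTE-style interpolation standing in for the real SMOTE, synthetic data standing in for student records, and the hyperparameter search step omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

def smote_like(X, y, minority=1, rng=None):
    """Minimal SMOTE-style oversampler: new minority samples are linear
    interpolations between random pairs of existing minority samples."""
    rng = rng or np.random.default_rng(0)
    Xm = X[y == minority]
    n = (y != minority).sum() - len(Xm)            # samples needed to balance
    i = rng.integers(0, len(Xm), n)
    j = rng.integers(0, len(Xm), n)
    synth = Xm[i] + rng.random((n, 1)) * (Xm[j] - Xm[i])
    return np.vstack([X, synth]), np.concatenate([y, np.full(n, minority)])

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (180, 4)), rng.normal(1.5, 1, (20, 4))])
y = np.array([0] * 180 + [1] * 20)                 # imbalanced toy labels
Xb, yb = smote_like(X, y)

# Hybrid model: average the two classifiers' predicted probabilities.
hybrid = VotingClassifier(
    [("lr", LogisticRegression(max_iter=1000)),
     ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft").fit(Xb, yb)
acc = hybrid.score(Xb, yb)
```

In the full pipeline, the random search CV step would tune each base estimator's hyperparameters before the vote.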
Prediction of ROI Achievements and Potential Maximum Profit on Spot Bitcoin Rupiah Trading Using K-means Clustering and Patterned Dataset Model Parlika, Rizky; Isnanto, R. Rizal; Rahmat, Basuki
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.62527/joiv.8.3-2.3120

Abstract

Since Satoshi Nakamoto first proposed the idea of bitcoin in 2009, the cryptocurrency and prediction methods for it have grown and changed exceptionally quickly. The Patterned Dataset Model was a valuable tool in earlier studies to explain how changes in the price of Bitcoin affect the movements of other cryptocurrencies in a digital trading market. Three different kinds of datasets are generated by this model: patterned datasets under full conditions, patterned datasets under dropping prices (Crash), and patterned datasets under rising prices (Moon). The K-means approach was then used to cluster these three datasets. Specifically, each dataset was split into two clusters, and the clustering score was determined by utilizing eight unique clustering metrics. Consequently, the best clustering score was found in the patterned dataset in the crash situation. Additionally, from 2022 to 2024, the raw data from this crash-condition-patterned dataset is used to determine the possibility of reaching maximum profit and return on investment (ROI) daily and monthly. According to the calculation results, the range computed over the course of a whole month (30 to 31 days) is significantly larger than the daily range (24 hours multiplied by one month), which represents the most significant profit and ROI attained before the emergence of the first diamond crash level. This research also covers the application of a deep learning model to forecast patterned datasets for crash scenarios that may occur many days in advance. The ConvLSTM2D Model performs better in predicting pattern dataset values for the subsequent crash scenario, according to the hyperparameter comparison between the Gated Recurrent Unit (GRU) Model and the 2D Convolutional Long Short-Term Memory Model.
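The clustering step, splitting a dataset into two clusters and scoring the split, can be sketched as below. The two-feature stand-in data is purely illustrative (the real inputs come from the patterned dataset model), and silhouette is one of the eight clustering metrics the study mentions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
# Stand-in rows for two market regimes: illustrative (price change, volume)
# pairs for a crash-like regime and a calm regime.
crash = np.column_stack([rng.normal(-5, 1, 100), rng.normal(8, 1, 100)])
calm  = np.column_stack([rng.normal(0, 1, 100),  rng.normal(2, 1, 100)])
X = np.vstack([crash, calm])

# Split into two clusters, as done for each patterned dataset.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
score = silhouette_score(X, km.labels_)   # one of several clustering metrics
```

A higher silhouette score (closer to 1) indicates a cleaner split, which is how the crash-condition dataset was identified as the best-clustered of the three.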
