Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Prediction Analysis of Greeting Gestures Based on Recurrent Neural Networks Wibowo, Angga; Kurnianingsih, -; Sato-Shimokawara, Eri
JOIV : International Journal on Informatics Visualization Vol 9, No 3 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.3.2917

Abstract

Human activity recognition, with applications in rehabilitation, sports, human behavior analysis, and other domains, is developing rapidly. A Recurrent Neural Network (RNN) is a practical approach for human activity recognition and sequential data research. However, studies on recognizing human activities rarely consider culture, including greeting gestures, and studies employing the RNN approach seldom use small datasets, as they typically rely on large amounts of data. This study aims to predict greeting gestures from Japan and Indonesia with limited data. It proposes and compares six RNN architectures, including Long Short-Term Memory (LSTM), Bidirectional RNN (BRNN), Gated Recurrent Unit (GRU), Vanilla RNN (VRNN), Deep RNN (DRNN), and Hierarchical RNN (HRNN), each modified with regularization to handle overfitting. We evaluate using Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Determination (R²). The experimental results show that LSTM has the best MSE, RMSE, and MAE values, with an MSE of 0.0773479, RMSE of 0.2781149, and MAE of 0.2402451, while GRU has the best R² value at 0.0267571. The study concludes that LSTM and GRU are more suitable than the other models for this problem. These findings can benefit future research addressing the challenges of small data and overfitting in sequential data and human activity recognition, particularly for greeting gestures. Future work can utilize data augmentation, proper parameter selection, and data from multiple individuals to enhance model accuracy.
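As an illustration of the evaluation metrics named in the abstract, here is a minimal sketch that computes MSE, RMSE, MAE, and R² for a pair of sequences; the input values are toy numbers, not the paper's data:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, and R^2 for paired sequences."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total variance
    ss_res = sum(e * e for e in errors)              # residual variance
    r2 = 1 - ss_res / ss_tot
    return mse, rmse, mae, r2

# Toy ground-truth and predicted sequences (hypothetical).
mse, rmse, mae, r2 = regression_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```

Note that a lower MSE/RMSE/MAE and a higher R² indicate a better fit, which is how the paper's model comparison is framed.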
Visual Analytic for Traffic Impact Assessment Chan, Jia Chun; Fahad, Nafiz; Goh, Kah Ong Michael; Tee, Connie
JOIV : International Journal on Informatics Visualization Vol 8, No 3 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3.2314

Abstract

This study aims to advance traffic impact assessment through visual analytics, incorporating spatial and temporal data visualization to enhance traffic management. Using a dataset of traffic flow at three major intersections, we combined data cleaning, integration, and transformation to prepare for a detailed visual analysis. The key inputs comprise traffic counts in multiple lanes, vehicle types, and saturation flow rates, which together characterize the road network's capacity. Daily and hourly traffic volume variations were explored and patterns identified using heat maps, parallel coordinate charts, and bar plots. The findings reveal marked differences in traffic volume and patterns between peak and off-peak hours on weekdays and weekends. The level of service at each junction was determined by the volume-to-capacity ratio, identifying potentially congested areas. This work underscores the importance of further improving visual analytic techniques to predict traffic patterns accurately and evaluate traffic management strategies effectively. Predictive models based on visual analytic findings can pave the way for proactive traffic control and congestion mitigation, making urban traffic management more efficient and safer. The study provides a scaffold for further exploration of these methods and their practical outcomes in urban development planning and policy, including sustainable traffic control strategies and real-time decision-making improvements.
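The volume-to-capacity classification described in the abstract can be sketched as follows; the thresholds and grade labels are illustrative assumptions (loosely HCM-style), not taken from the paper:

```python
def level_of_service(volume, capacity):
    """Classify a junction by its volume-to-capacity (v/c) ratio.

    Thresholds are illustrative, loosely following HCM-style grades.
    """
    vc = volume / capacity
    if vc <= 0.60:
        return vc, "A-B (free/stable flow)"
    if vc <= 0.80:
        return vc, "C-D (approaching unstable flow)"
    if vc <= 1.00:
        return vc, "E (at capacity)"
    return vc, "F (over capacity / congested)"

# Hypothetical peak-hour counts for one junction.
vc, los = level_of_service(volume=1450, capacity=1600)
```

A junction whose v/c ratio approaches or exceeds 1.0 would be flagged as a potential congestion area, matching the assessment the abstract describes.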
A Review of Livestock Smart Farming for Sustainable Food Security Zaabar, Liyana Safra; Yacob, Adriana Arul; Nathan, Deventhren Kamala; Hing Yip, Emmerich Wong; Mat Razali, Noor Afiza
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2794

Abstract

Maintaining food security through sustainable farming methods is a significant challenge as the global population grows. This study examines the impact of smart farming methods on enhancing farm animal output to satisfy rising demand while fostering sustainability. Smart livestock farming incorporates automation, Internet of Things (IoT) sensors, and machine learning algorithms to improve production, efficiency, and resource utilization. With an emphasis on essential factors including automated feeding, environmental monitoring, and health tracking, this study takes a methodical approach to reviewing IoT-based livestock farming. The efficiency of several sensor technologies, including motion, temperature, humidity, and biometric sensors, in gathering data and supporting real-time decisions is examined. The potential of machine learning methods such as pattern identification, anomaly detection, and predictive analytics to maximize the production and health of farm animals is assessed. According to the results, IoT-driven livestock farming improves illness diagnosis, minimizes resource waste, and optimizes feeding practices, increasing production efficiency. These developments minimize environmental impact while promoting steady food production. Additionally, automation in livestock production reduces the need for human intervention, which lowers costs and improves decision-making. This study demonstrates how smart agricultural technology may be used to address issues related to food security. Further research is needed to improve real-time data processing, refine machine learning models, and investigate affordable options for broadly adopting these ideas into practice. With such advances, livestock management may be transformed, guaranteeing a robust and sustainable agricultural environment.
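As a sketch of the anomaly-detection idea the review discusses, the following flags sensor readings by z-score; the readings, the temperature interpretation, and the threshold are hypothetical:

```python
import statistics

def flag_anomalies(readings, z_threshold=2.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # constant signal: nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > z_threshold]

# Hypothetical body-temperature readings from one animal (°C);
# the spike could indicate illness worth investigating.
temps = [38.5, 38.6, 38.4, 38.5, 41.9, 38.6, 38.5]
outliers = flag_anomalies(temps, z_threshold=2.0)
```

In a real deployment, flagged indices would trigger a health alert rather than simply being returned; this sketch only shows the detection step.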
Problem-Frame-Oriented Requirements Traceability to Enhance Requirements Management ShengWen, Xiao; Hassan, Sa'adah; Che Pa, Noraini
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3-2.3476

Abstract

Managing software requirements is a challenge in software development and maintenance. Requirements changes are inevitable, particularly in rapid iterative development approaches that lead to occasional changes in software requirements. Failure to manage this properly will impact the overall quality of the software. Requirements traceability is therefore essential because it ensures that all requirements are adequately addressed, that changes are managed effectively, and that there is a clear linkage between business requirements and the system's functionality. Inadequate traceability mechanisms can make changing requirements and detecting their impact difficult. Thus, it is crucial to establish precise requirements traceability and maintain clear links to manage requirement changes effectively. Our research explores a problem frames modeling approach to address this issue. It starts by representing requirements as problems, creating a requirements relationship diagram, and generating a corresponding relationship matrix. The values in the traceability matrix help identify which elements are most affected by requirement changes, allowing developers to prioritize changes that minimize overall system impact. Furthermore, using problem frame modeling, complex problems can be broken down into manageable sub-problems, providing a clear structure for understanding the requirements. Additionally, a tool has been created to streamline the process, and a case study is used to demonstrate its functionalities. An evaluation has been conducted to assess the usability of the proposed work. The requirements relationship diagrams and relationship matrices visually and quantitatively map the links between requirements, enabling traceability and identifying the impact of changes in requirements.
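A minimal sketch of how a relationship matrix can quantify change impact; the binary matrix, requirement IDs, and scoring rule here are toy assumptions, not the paper's actual matrix:

```python
# Rows/columns are requirement IDs; matrix[i][j] = 1 means a change to
# requirement i propagates to requirement j (all values hypothetical).
reqs = ["R1", "R2", "R3", "R4"]
matrix = [
    [0, 1, 1, 0],   # R1 affects R2 and R3
    [0, 0, 1, 0],   # R2 affects R3
    [0, 0, 0, 1],   # R3 affects R4
    [0, 0, 0, 0],   # R4 affects nothing
]

def impact_ranking(reqs, matrix):
    """Rank requirements by how many others a change would directly affect."""
    scores = {r: sum(row) for r, row in zip(reqs, matrix)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = impact_ranking(reqs, matrix)
```

A developer would prioritize changes to low-impact requirements (bottom of the ranking) to minimize overall system disturbance, mirroring the prioritization the abstract describes.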
Predictive AC Control Using Deep Learning: Improving Comfort and Energy Saving Mohd Ameeruddin, Ahmad Azhan bin; Tan, Wooi-Nee; Gan, Ming-Tao; Yip, Sook-Chin
JOIV : International Journal on Informatics Visualization Vol 7, No 3-2 (2023): Empowering the Future: The Role of Information Technology in Building Resilien
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3-2.2345

Abstract

The growing global population and the availability of energy-hungry smart devices are critical factors in today's alarmingly high electricity usage. The majority of energy used in urban areas is consumed by buildings, with heating, ventilation, and air conditioning (AC) systems accounting for a significant share. This project proposes an AC control algorithm that uses Internet of Things sensors and a deep learning framework for temperature prediction to control a single AC unit. The algorithm uses a Long Short-Term Memory (LSTM) model to predict the indoor temperature for the next J minutes. The highlight of this model is its capacity to predict the future temperature based on the predetermined AC status, whether switched on or off. The AC unit is turned off if the J-minute predicted temperature is within the desired thermal comfort range, and turned back on if the sensor readings exceed the upper pre-set threshold. The experiment is performed on a dataset collected by the Chulalongkorn University Building Energy Management System (CU-BEMS). The LSTM prediction model developed using CU-BEMS data yields an average Root Mean Squared Error and Mean Absolute Error of 0.08 and 0.03, respectively. A half-day simulation controlling the AC unit from 7:39 a.m. to 11:35 a.m. is also performed. The proposed algorithm shows that the AC unit can be turned off 49.00% of the time while the thermal range is maintained between 27 °C and 27.9 °C, providing a strategy for managing the AC unit and achieving energy savings.
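The on/off decision rule described in the abstract can be sketched as follows; the comfort band matches the reported 27 °C to 27.9 °C range, while the function name and interface are assumptions for illustration:

```python
def ac_decision(current_temp, predicted_temp, ac_on,
                comfort_low=27.0, comfort_high=27.9):
    """Decide the next AC state.

    Turn the unit off when the J-minute predicted temperature stays
    inside the comfort band; turn it back on when the current sensor
    reading exceeds the upper threshold; otherwise keep the state.
    """
    if ac_on and comfort_low <= predicted_temp <= comfort_high:
        return False          # predicted to stay comfortable: save energy
    if not ac_on and current_temp > comfort_high:
        return True           # too warm again: resume cooling
    return ac_on              # no change needed

# Example: AC is on, prediction says the room will stay comfortable.
next_state = ac_decision(current_temp=27.5, predicted_temp=27.6, ac_on=True)
```

In the paper's setup, `predicted_temp` would come from the LSTM forecast conditioned on the assumed AC status; here it is simply a number passed in.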
Integrating Spatial Computing with Clinical Pathology for Enhanced Diagnosis and Treatment Informatics in Healthcare Chituru, Chinwe Miracle; Ho, Sin-Ban; Chai, Ian
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3-2.2951

Abstract

This paper investigates spatial computing, a transformational modern technology that integrates the physical and digital realms and has the potential to revolutionize pathology healthcare. Pathology, as a medical specialty, plays a crucial role in patient care by providing essential information for diagnosis, treatment planning, and disease monitoring; it studies and diagnoses diseases by examining tissues, organs, bodily fluids, and cells. Pathology is a broad field with three main branches: anatomic pathology, clinical pathology, and molecular pathology. This study examines the possibilities of spatial computing in radiography and clinical pathology, with emphasis on diagnostic accuracy, medical education, workflow efficiency, and patient outcomes. Augmented Reality (AR) medical devices guide pathologists in real time during diagnostic procedures. The digital reproduction of tissue samples that allows pathologists to examine specimens in three dimensions is a significant application of spatial computing in virtual microscopy. This process allows remote collaboration between pathologists and laboratories, provides health informatics as seen in electronic health records (EHRs), improves diagnosis, and offers a platform for learning experiences in the medical field. Patients can interact with three-dimensional simulations of their anatomy, which helps them make better-informed treatment decisions based on pathology findings and treatment alternatives presented in an immersive format. As this technology advances, its potential to transform pathology practice and improve patient care remains high. This review describes technological perspectives and discusses the statistical methods, clinical applications, potential obstacles, and future directions of spatial computing in clinical pathology.
Design of Tools for Visualizing Thermodynamic Concepts in Steam Power Plant Trainer Processes with Web-Based Exploratory Data Analysis (EDA) Karudin, Arwizet; Leni, Desmarita; Lapisa, Remon; Kusuma, Yuda Perdana; Abbas, Muhammad Rabiu
JOIV : International Journal on Informatics Visualization Vol 8, No 3 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3.2139

Abstract

Thermodynamics is considered one of the most complex and challenging subjects for many students, primarily because abstract concepts such as entropy, enthalpy, and energy flow involve complex mathematical equations and are rarely accompanied by tangible visualizations. This research aims to design, develop, and test a data-based visualization tool for thermodynamics test results. The study collected and processed data from thermodynamics tests and simulations, such as the mini steam power plant trainer used as a teaching aid in thermodynamics education, as the foundation for designing a data-based visualization tool for thermodynamics concepts. The visualization tool was created using the Python programming language integrated with the web-based Streamlit framework. It encompasses various features, including automated data reporting, visualization of variable correlations using correlation heatmaps, Sankey diagrams for visualizing energy flow, and the capability to predict electrical output using three different machine learning algorithms. The tool was evaluated by thermodynamics experts using a Likert scale. The experts gave an average score of 4 for information accuracy, in the good category, indicating that the information displayed is consistent with thermodynamics learning at Padang State University. For the visualization aspect, experts gave an average score of 4.25, in the Good to Very Good range. For the education aspect, experts gave an average score of 3.75, close to the good category, indicating that this aspect is considered suitable for studying thermodynamics, although some shortcomings still need to be corrected. Experts rated the ease-of-use aspect relatively highly, with an average score of 4.5, in the Good to Very Good range. The tool enables students to better understand complex patterns, cause-and-effect relationships, and parameter changes within thermodynamics concepts.
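As a sketch of the variable-correlation feature, the following computes the Pearson coefficient that a correlation heatmap cell would display; the trainer readings and variable names are hypothetical:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical trainer readings: steam pressure (bar) vs. output (kW).
pressure = [2.0, 2.5, 3.0, 3.5, 4.0]
output_kw = [0.8, 1.1, 1.3, 1.6, 1.9]
r = pearson(pressure, output_kw)
```

A heatmap simply arranges such coefficients for every variable pair in a grid; a value near +1 here would be rendered as a strongly positive cell.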
Low-Resolution Face Image Reconstruction Using Multi-Stage FSRCNN to Improve Face Detection and Tracking Accuracy in CCTV Surveillance Tommy, -; Siregar, Rosyidah; Rahman Syahputra, Edy
JOIV : International Journal on Informatics Visualization Vol 9, No 3 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.3.3160

Abstract

Face detection and tracking in real-world conditions remain challenging under varying illumination, crowded scenes, partial occlusions, and small or low-resolution face images. In traditional face tracking schemes, these factors often cause a high false-positive rate and low accuracy. In particular, little detailed information is available for small or distant faces, where detection reliability is diminished and non-face objects can trigger false alarms, degrading overall system performance. Such problems are not trivial and need a sophisticated solution to improve resolution and detection performance across scenarios. In this paper, a new face tracking system based on a cascade classifier, a two-stage Fast Super-Resolution Convolutional Neural Network (FSRCNN) model, and a DLib face validator is presented. Low-resolution facial regions are first enhanced by the FSRCNN to improve detection by the cascade classifier. The DLib face validator strengthens the approach by validating detected faces and reducing false positives. The system was tested on a corpus of CCTV videos covering several challenging conditions, including crowded environments, dynamic objects, and human faces of different sizes and locations. The performance analysis focused on metrics such as precision, recall, and false-positive rate, providing a comprehensive overview of the system's robustness. The results demonstrate a significant improvement in face detection accuracy, with precision as high as 98% and very few false-positive detections. The synergy between the FSRCNN method and DLib validation was especially effective for small and distant faces, which are normally difficult to detect. While the improvements in memory consumption were small, the methods proved effective for face detection in challenging conditions. The system's ability to maintain high accuracy while avoiding errors makes it well suited for surveillance, security, and monitoring systems. In conclusion, this research highlights the effectiveness of combining super-resolution techniques with traditional face detection methods to address the limitations of existing systems. Future work will focus on increasing the recall rate and maturing the system to work well in various realistic conditions, making it effective and general across applications.
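The reported detection metrics can be computed directly from confusion-matrix counts; a minimal sketch with hypothetical counts (not the paper's figures):

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, and false-positive rate for a detector.

    tp/fp/fn/tn are true-positive, false-positive, false-negative,
    and true-negative counts from a labeled evaluation set.
    """
    precision = tp / (tp + fp)   # fraction of detections that are real faces
    recall = tp / (tp + fn)      # fraction of real faces that were detected
    fpr = fp / (fp + tn)         # fraction of non-faces wrongly flagged
    return precision, recall, fpr

# Hypothetical counts from one CCTV test clip.
precision, recall, fpr = detection_metrics(tp=98, fp=2, fn=10, tn=190)
```

With these toy counts, precision is high while recall lags, which mirrors the paper's stated plan to focus future work on the recall rate.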
Determinants Generating General Purpose Technologies in Economic Systems: A New Method of Analysis and Economic Implications Kargı, Bilal; Coccia, Mario; Uçkaç, Bekir Cihan; Rasyidah, -
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3-2.2657

Abstract

This research proposes using the fishbone diagram, a visualization tool for constructing a comprehensive theoretical framework to analyze the sources of innovation. Traditionally employed to identify causes of specific events, the fishbone diagram is applied innovatively to explore the root causes driving the emergence and evolution of General Purpose Technologies (GPTs). The study identifies critical driving forces such as increased democratization, population growth, demographic shifts, significant investments in research and development (R&D), global leadership aspirations among major powers, competitive socioeconomic environments, and potential threats from adversarial actors. By visually representing these drivers, the fishbone diagram offers insights crucial for technological analysis and foresight, illuminating groundbreaking innovations that drive technological and economic progress. Illustrated through examples from historical GPTs like the steam engine and contemporary technologies such as Information and Communication Technologies (ICTs), this study establishes a foundational framework for developing precise hypotheses about the specific causes and socio-economic impacts of GPTs. The fishbone diagram emerges as a versatile tool adept at systematically analyzing the complex root causes associated with GPTs, facilitating foresight and strategic management of these transformative innovations within society.
No-Show Passenger Prediction for Flights Chin, Wei-Song; Ting, Choo-Yee; Cham, Chin-Leei
JOIV : International Journal on Informatics Visualization Vol 7, No 3-2 (2023): Empowering the Future: The Role of Information Technology in Building Resilien
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.3-2.2328

Abstract

In aviation, "no-show" refers to a customer who booked a reservation but failed to show up. No-shows can result in various resource wastes, such as vacant seats, leading to income loss and flight delays. As a result, no-show passengers can cause considerable problems for airlines, ultimately affecting their bottom line. Recent research has shown the use of machine learning algorithms to reduce the rate of no-shows; for example, researchers in healthcare use predictive models to identify no-show patients and increase efficiency. Therefore, this study aimed to develop models to predict passenger no-shows. In this work, we used a dataset supplied by a local airline company consisting of 1,046,486 rows and 8 columns. Additional datasets, such as weather data, public holiday data from different countries, aircraft details, and foot traffic data, were used for feature enrichment to complement the original dataset. Feature selection consequently became an important stage in this research, identifying the most relevant and useful features from the large number of columns. The findings showed that the model built using Random Forest has the highest accuracy at 90.4%, while Decision Tree performed at 90.2%, Gradient Boosting at 86.5%, and Neural Networks at 67.6%. To enhance model accuracy, further research should integrate supplementary passenger information.
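As a sketch of the random-forest idea behind the best-performing model, the following builds a tiny ensemble of one-feature decision stumps in plain Python; the feature names, rows, and leave-one-out subsampling (standing in for bootstrap resampling) are all illustrative assumptions, not the airline's data:

```python
def train_stump(data):
    """Fit a one-feature threshold stump to (features, label) rows."""
    best = None
    n_features = len(data[0][0])
    for f in range(n_features):
        for t in sorted({row[0][f] for row in data}):
            preds = [1 if row[0][f] > t else 0 for row in data]
            acc = sum(p == row[1] for p, row in zip(preds, data)) / len(data)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best[1], best[2]   # (feature index, threshold)

def forest_predict(stumps, x):
    """Majority vote over the ensemble, as in a random forest."""
    votes = sum(1 if x[f] > t else 0 for f, t in stumps)
    return 1 if votes * 2 > len(stumps) else 0

# Hypothetical rows: (days_booked_ahead, past_no_shows) -> no-show label.
data = [((30, 0), 0), ((25, 0), 0), ((2, 3), 1), ((1, 4), 1),
        ((20, 1), 0), ((3, 2), 1), ((28, 0), 0), ((2, 5), 1)]
# Leave-one-out subsamples stand in for bootstrap resampling here.
stumps = [train_stump(data[:i] + data[i + 1:]) for i in range(len(data))]
prediction = forest_predict(stumps, (2, 4))   # last-minute repeat no-show
```

A production system would use a library implementation with deep trees and random feature subsets; this sketch only shows the subsample-then-vote structure that gives the ensemble its robustness.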