Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 25499610     EISSN : 25499904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the interchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers in fundamental theory, experiments, and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous works, an expanded discussion, and a concise conclusion. As part of our commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 52 Documents
Search results for issue "Vol 9, No 1 (2025)": 52 Documents
Using Artificial Neural Networks to Forecasting Carbon Dioxide Emissions in Iraq Ahmed, Shaymaa Mohammed; Sheab, Gheada Ibrahim; Hasan, Arshad Hameed; Hanon, Muammel M.
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2456

Abstract

This paper explores the application of artificial neural networks (ANNs) to forecast CO2 emissions in Iraq up to 2028. ANNs can model the non-linear dynamics of time-series data, producing accurate forecasts without any statistical assumptions about the features of the dataset. The authors developed a simple single-input feedforward ANN model, using yearly CO2 emission data from 1991 to 2023 with the year as input, to project future emissions. The Levenberg-Marquardt algorithm was used for network training. The model performed well on the training, validation, and testing datasets, with minimal error rates and R-squared values of 1, implying a good regression fit between targets and outputs. The forecasting performance of the ANN was then evaluated. The mean squared error (MSE = 0.1325) and root mean squared error (RMSE = 0.3641) were low, indicating small forecasting errors. R2 was high (0.946), indicating the model could explain as much as 94.6% of the variance in the actual data. The mean absolute percentage error was 8.01%, which signifies a good forecast with less than 10% error. The forecast for 2028 shows per capita emissions reaching 3.649 tons, which may be affected by population growth, economic development, or infrastructure changes. Renewable energy, efficiency improvements, and emissions-control policies could, however, alter this growth curve. The model serves as a data-driven instrument for forecasting future Iraqi CO2 emissions, supporting the development of climate-change mitigation policies without relying on time-series statistical assumptions, and it could be extended to other greenhouse gases and countries. This paper shows that ANNs can produce emission forecasts that are accurate and reliable enough for decision-making, helping to reduce the country's carbon footprint and its contribution to climate change.
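To make the reported error metrics concrete, here is a minimal pure-Python sketch of how MSE, RMSE, and MAPE are computed from actual and predicted series; the per-capita values below are hypothetical placeholders, not the paper's data:

```python
import math

def forecast_metrics(actual, predicted):
    """Compute the three error metrics reported in the abstract: MSE, RMSE, MAPE (%)."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, rmse, mape

# Hypothetical per-capita CO2 values (tons) -- illustrative only, not the study's data.
actual = [3.1, 3.2, 3.4, 3.5]
predicted = [3.0, 3.3, 3.3, 3.6]
mse, rmse, mape = forecast_metrics(actual, predicted)
```

A MAPE below 10%, as in the paper, is conventionally read as a good forecast.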
Multi Task Deep Learning with Transformer Encoder Decoder for Semantic Segmentation Indah, Komang Ayu Triana; Darma Putra, I Ketut Gede; Sudarma, Made; Hartati, Rukmi Sari
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.1978

Abstract

Visual understanding is one of the core elements of computer vision, consisting of image classification, object detection, and segmentation. The system applies a multilayer process to obtain complex image and video understanding, using deep learning methods to convert images to text. This study therefore extracted video frames and applied Transformer and Inception V3 architectures to the image captioning process. The synchronization was based on a Multi-task Deep Learning method developed by combining a Convolutional Neural Network (CNN) for the image area, a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) for the sentence area, a Caption Content Network (CCN), and a Relational Network Context (RCN). Moreover, a Transformer Encoder-Decoder architecture was used for labeling and determining the relationships between objects. The results of the image-to-text conversion process were evaluated by comparing the candidate translated text with one or more references. This was achieved using accuracy and loss validation tables that provide graphical comparisons between the number of epochs and the losses. The test results showed that the validation accuracy was 70.166% while the loss was 22.648%, and that more epoch iterations led to greater validation accuracy.
Keywords — Visual Understanding, Transformer, Encoder, Decoder
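The caption-evaluation step described above (comparing candidate text against one or more references) is typically built on clipped unigram precision, the core of BLEU-style scores. A minimal sketch with made-up sentences, not the study's data:

```python
from collections import Counter

def unigram_precision(candidate, references):
    """Clipped unigram precision: each candidate word's count is capped by
    its maximum count in any single reference (the BLEU-1 building block)."""
    cand_counts = Counter(candidate.split())
    max_ref = Counter()
    for ref in references:
        for word, count in Counter(ref.split()).items():
            max_ref[word] = max(max_ref[word], count)
    clipped = sum(min(count, max_ref[word]) for word, count in cand_counts.items())
    return clipped / sum(cand_counts.values())

# Toy example: "a" appears twice in the candidate but only once in the reference,
# so one occurrence is clipped away.
score = unigram_precision("a cat on a mat", ["a cat sits on the mat"])
```

Full BLEU additionally combines higher-order n-grams and a brevity penalty.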
An Improved Hybrid GRU and CNN Models for News Text Classification Khudhair, Inteasar Yaseen; Majeed, Sundus Hatem; Ahmed, Ali Mohammed Saleh; Kadhim Alsaeedi, Mokhalad Abdulameer; Aswad, Firas Mohammed
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2658

Abstract

Due to the continuous growth and advancement of technology, an enormous volume of text data is generated daily across various sources, including social media platforms, websites, search engines, healthcare records, and news articles. Extracting meaningful patterns from text data, such as viewpoints, related theories, journal distribution, facts, and the development of online news text, is challenging due to the varying lengths of the texts. One issue arises from the length of the text data itself; another lies in extracting valuable features, especially in news articles. Among deep learning models, convolutional neural networks (CNNs) can capture local features in text data but cannot capture the structural information or semantic relationships between words. Consequently, a CNN alone often yields poor performance in text classification tasks, whereas the Gated Recurrent Unit (GRU) effectively extracts semantic information and captures the global structural relationships present in textual data. This paper addresses the problem by introducing a new text classification model that integrates the strengths of CNN and GRU. The proposed hybrid models incorporate word vectorization and word dispersion in parallel. Initially, the model trains word vectors using the Word2vec model and then leverages the GRU model to capture semantic information from text sentences. Subsequently, the CNN method captures crucial semantic features, leading to classification using the SoftMax layer. Experimental findings demonstrated that the proposed hybrid GRU_CNN model outperformed individual CNN, LSTM, and GRU models in classification effectiveness, achieving an accuracy of 97.73%.
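To make the GRU component concrete, here is a toy scalar GRU cell implementing the standard update-gate and reset-gate equations; the weights are arbitrary illustrative values, not a trained model:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def gru_step(x, h, w):
    """One scalar GRU step. `w` holds six toy weights (untrained, illustrative).
    z gates how much of the candidate state replaces the old state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)                 # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                 # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

w = dict(wz=0.5, uz=0.1, wr=0.4, ur=0.2, wh=0.9, uh=0.3)
h = 0.0
for x in [0.2, -0.1, 0.5]:   # toy sequence standing in for word-vector inputs
    h = gru_step(x, h, w)
```

In the paper's pipeline this recurrence runs over Word2vec embeddings before the CNN stage; here it is reduced to scalars purely for readability.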
Analyzing Course Selection by MBTI Personality Types Goo, Cui-Ling; Leow, Meng-Chew; Ong, Lee-Yeng
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2937

Abstract

This research project explores the relationship between course selection and Myers-Briggs Type Indicator (MBTI) personality types. It focuses on a private university's IT Faculty students pursuing AI, BIA, BIO, DCN, and ST courses. In higher education, the influence of personality types on course selection is not well understood. This research aims to determine the statistically significant differences between courses for each personality profile. To achieve this, survey data is systematically analyzed to provide useful insights into the distribution of course selection among personality types through descriptive analysis and inferential statistical tests, such as the Kruskal-Wallis test. These assessments examine the statistically significant differences between courses for each personality profile, supported by a p-value < 0.05. Descriptive analysis shows that INFJ occurred in every course, indicating the wide distribution of this personality type among students. In addition, INF_ types predominantly appear as the median personalities across all courses among the participants. The majority of the participants have the INTP personality type. The inferential statistical results show statistically significant differences in the distribution of courses for 8 MBTI personality types, while the remaining MBTI types are not statistically significant. The results also show statistically significant differences between courses for each personality dimension. These results can be used to advise students on course selection. Future research could expand this study by including a more diverse range of universities and courses and incorporating additional personality assessments.
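The Kruskal-Wallis test used above reduces to an H statistic computed over pooled ranks. A minimal sketch with hypothetical scores (not the survey data), assuming no tied values:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for k independent samples (no tie correction).
    Values are assumed distinct, so ranks are simply positions in the pooled sort."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    n = len(pooled)
    return (12 / (n * (n + 1))
            * sum(rs ** 2 / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n + 1))

# Hypothetical scores for two hypothetical personality groups (illustrative only).
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6]])
```

In practice H is compared against a chi-squared distribution with k-1 degrees of freedom to obtain the p-value; library implementations also correct for ties.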
Toponym Extraction and Disambiguation from Text: A Survey Windiastuti, Rizka; Krisnadhi, Adila Alfa; Budi, Indra
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2763

Abstract

Toponyms are an essential element of geospatial information. Traditionally, toponyms are collected in a gazetteer through field surveys that require significant resources, including labor, time, and money. Nowadays, social media and online news portals can be used to collect event locations, or toponyms, from text. This article presents a survey of studies that focus on the extraction and disambiguation of toponyms from textual documents. While toponym extraction aims to identify toponyms in text, toponym disambiguation determines their specific locations on the Earth. The survey covered articles published between January 2015 and April 2023, written in English, and gathered from five major journal databases. It was conducted following the Kitchenham guidelines, consisting of an initial article search, article selection, and an annotation process to facilitate the reporting phase. We employed Mendeley as a reference management tool and NVivo to categorize the parts of the articles that are the focal points of this survey. The primary focus was on the methods or approaches used in the research articles to extract and disambiguate toponyms. We also discuss general challenges in toponym research, applications of toponym extraction and disambiguation, data sources, and the use of languages other than English in the studies. The survey confirms that each approach has its limitations. Extracting and disambiguating toponyms from text is complex and challenging, especially for low-resource languages. We also suggest research directions related to toponym extraction and disambiguation that could enrich the gazetteer.
Social Platforms in the Deepfake Age: Navigating Media Trust through Media Literacy Lee, Fong Yee; Kumaresan, S Prabha; Abdulwahab Anaam, Elham; Chee Kong, Wong
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3490

Abstract

The social media landscape suffers from a proliferation of disinformation and misinformation. The spread of deepfakes makes it harder to distinguish authentic content from fabricated content. The mediating effect of media literacy on news credibility has been understudied in previous research; the objective of this study is to investigate how much media literacy, news skepticism, and fear of missing out (FOMO) influence users' trust in the news disseminated on social media platforms. To achieve this, a survey was conducted to assess trust in and skepticism towards social media news, FOMO levels, and media literacy associated with deepfake news content. Educational efforts and media literacy initiatives are crucial in fostering informed and discerning news consumption. Furthermore, news organizations must continue to prioritize transparency and accuracy to maintain credibility on social media, since news is easily accessible in an era of information overload. A limitation of the study is that it did not evaluate the effectiveness of media literacy in combating fabricated news content on social media. It is suggested that future work broaden the scope by studying additional factors for combating fake news, such as journalistic standards, fact-checking, and verification, which are important for building readers' trust. Future studies should also measure the effectiveness of media literacy initiatives to ensure they make a real difference. The generalizability of future studies can be strengthened by including diverse age groups, especially vulnerable populations.
Security System for Door Locks Using YOLO-Based Face Recognition Putri, Hasanah; Hadiyoso, Sugondo; Putri Fatoni, Salwa Berliana; Octaviany, Vany; Wulandari, Astri; Aprilina, Riska; Rosmiati, Mia
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2410

Abstract

In an era of technological advances and sophisticated algorithms that make human life easier, the facial-recognition smart lock is a system that uses one of these algorithms to address security problems in smart home technology. Such smart locks can be installed near doors to monitor homes, companies, and universities. The problem with current facial-recognition smart lock solutions is that they are not fast or precise enough. The door is a building component whose security must be considered to prevent theft attempts. Buildings with many rooms, such as hotels, must have doors with strong security systems. RFID cards are commonly used to access hotel rooms, but they have many shortcomings: guests often leave their RFID cards in the room and can no longer enter without first reporting to the receptionist, and the cards are easily lost, so guests who lose them are fined for the cost of replacement. Therefore, a door security system using face recognition with the YOLO algorithm was built. The YOLO algorithm detects the face of anyone who wants to access the door. The test results show that the system can detect faces with an accuracy of 94.4%.
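YOLO-style detectors such as the one used here are conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch with hypothetical box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs. ground-truth face boxes (illustrative coordinates).
overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A detection is typically counted as correct when IoU exceeds a threshold such as 0.5, which is how accuracy figures like the 94.4% above are usually derived.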
Systematic Literature Review on Persuasive System Design Framework for Managing Curriculum Performance Saifunnizam, Syamir Thaqif; Md Fudzee, Mohd Farhan; Hanif Jofri, Muhamad; Kasim, Shahreen; Arrova Dewi, Deshinta; Arshad, Mohamad Safwan; Yulherniwati, -
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3663

Abstract

Integrating digital resources into educational assessment has led to the widespread adoption of e-portfolios as tools for documenting and evaluating student achievement, thereby transforming traditional evaluation methods. However, existing frameworks primarily focus on assessing academic performance, often neglecting comprehensive monitoring of students' co-curricular activities. To overcome current gaps in comprehensive student evaluation, this study introduces a conceptual framework incorporating persuasive system design (PSD) into an e-portfolio to facilitate efficient co-curricular performance monitoring in Malaysian secondary schools. To ensure a thorough approach to educational evaluation, academic and extracurricular performance must be monitored and managed effectively to understand student progress comprehensively. The framework builds on the Oinas-Kukkonen and Harjumaa PSD model by adding Physical Activity, Sports, and Co-curriculum Assessment (PAJSK)-specific categories and key PSD elements (primary task support, dialogue support, system credibility support, and social support), all designed to improve user engagement and system dependability in an educational environment. This study adapts and discusses the persuasive design elements to meet the goals of educational assessment frameworks by comparing PSD implementations in e-health, e-tourism, e-commerce, and e-learning. The results offer an overview of developing a practical, engaging e-portfolio framework that facilitates comprehensive student evaluation, especially in educational environments focused on co-curricular achievement.
Challenges of Agile Software Development in the Banking Sector: A Systematic Literature Review Letelay, Kornelis; Mola, Sebastianus A. S; Go, Ratna Yulika
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2300

Abstract

The banking industry is expected to thrive, generate profits, and contribute to national development and societal welfare. However, the sector is susceptible to volatility caused by global and domestic economic fluctuations. This research aims to identify and address challenges specifically related to implementing agile methodologies within the banking sector. The study used a Systematic Literature Review (SLR) approach based on the Kitchenham guidelines. A substantial number of academic journal articles (1,933) were analyzed during this review, from which 28 relevant studies were extracted. These studies were chosen because they provided insights into the challenges of implementing agile practices in the banking domain. The analysis and categorization of these studies were structured according to the Project Management Body of Knowledge (PMBOK) 6th edition framework, which was employed to organize and understand the identified challenges systematically. The study's primary finding is that the most prevalent challenge in agile development within the banking sector is "Project Resource Management": effectively managing and allocating resources is a significant hurdle banks face when adopting agile methodologies. The challenges related to resource management are not confined to a single aspect; they encompass various dimensions, including human resources, technological resources, and organizational factors. This suggests that challenges in agile banking are multifaceted, involving issues related to people, technology, and the structure and processes within banking organizations.
Face Recognition for Logging in Using Deep Learning for Liveness Detection on Healthcare Kiosks Ryando, Catoer; Sigit, Riyanto; Setiawardhana, Setiawardhana; Sena Bayu Dewantara, Bima
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2759

Abstract

This study explores the enhancement of healthcare kiosks by integrating facial recognition and liveness detection technologies to address the limited accessibility of healthcare services for a growing population. Healthcare kiosks increase efficiency, lessen the strain on conventional institutions, and promote accessibility. However, conventional authentication methods such as passwords and RFID can be lost, stolen, or hacked, raising privacy and data security problems. Although more secure, face recognition is susceptible to spoofing attacks. To improve security, this study integrates liveness detection with face recognition. Data preparation is done using deep learning algorithms, namely FaceNet and Multi-task Cascaded Convolutional Neural Networks (MTCNN). The system authenticates persons in real time, providing correct identification. Data augmentation techniques make the model more accurate and robust. The system's usefulness is demonstrated by the experimental outcomes. The VGG16 model outperforms alternative designs such as MobileNet V2, ResNet-50, and DenseNet-121, achieving 100% accuracy in liveness detection. Combining face recognition and liveness detection greatly improves security, making the system a dependable option for real-world healthcare applications. By differentiating genuine faces from fake ones and foiling spoofing attempts, facial liveness detection boosts security. This study offers insights into building biometric systems for safe and effective identity verification in the healthcare industry.
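Embedding-based face verification of the kind FaceNet enables reduces to thresholding a similarity between embedding vectors. A minimal sketch using cosine similarity on toy 4-dimensional vectors (FaceNet's real embeddings are 128-dimensional; the vectors and threshold here are illustrative, not the study's values):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, probe, threshold=0.8):
    """Accept the probe face if its embedding is close enough to the enrolled one."""
    return cosine_similarity(enrolled, probe) >= threshold

# Toy embeddings standing in for FaceNet outputs (illustrative values only).
enrolled = [0.10, 0.90, 0.30, 0.20]
same     = [0.12, 0.88, 0.31, 0.19]   # same person, slightly different capture
other    = [0.90, 0.10, 0.20, 0.80]   # different person
```

Liveness detection runs as a separate classifier before this comparison, so a spoofed photo never reaches the verification step.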