Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the exchange of results of high-quality research in all aspects of Computer Science, Computer Engineering, Information Technology and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, as well as applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As our commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 1,172 Documents
Data-Driven User Personas in Requirement Engineering with NLP and Behavior Analysis Liang, He; Muhammad, Sufri; Zainudin, M.N. Shah
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.3625

Abstract

As technology rapidly evolves, software development faces growing complexity, requiring adaptation to dynamic user expectations. This study addresses a critical gap in the existing literature by integrating behavioral data and sentiment analysis into the user persona development process within the requirement engineering framework. The primary objective is to create more accurate and representative user personas that better guide software design and development. To achieve this, the research employs advanced Natural Language Processing (NLP) techniques to systematically analyze extensive behavioral and sentiment data collected from social media platforms. The integration process involves segmenting user data into behavioral patterns and emotional states, which are then synthesized to develop nuanced user personas. These personas are expected to significantly improve the accuracy of user requirements, leading to enhanced software performance, increased user satisfaction, and greater development efficiency. The target application area for this research is mobile telecommunications, where precise user understanding is critical. The results indicate that this approach not only refines the traditional persona method but also addresses the evolving needs of users more holistically. By advancing the methodology for user-centered design, this study contributes to the broader field of requirement engineering. Future research will validate and refine this approach across diverse domains, ensuring its adaptability and effectiveness in different contexts. This paper thus has the potential to make a significant impact on how user personas are developed and utilized in software engineering.
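The pipeline the abstract describes (behavioral data plus sentiment analysis, synthesized into persona segments) can be illustrated with a minimal sketch. The sentiment lexicon, thresholds, and segment names below are purely illustrative stand-ins, not the authors' method; real work would use trained NLP sentiment models on social media data.

```python
# Toy persona pipeline: score each user's posts with a tiny sentiment lexicon,
# then bucket users by activity level and average sentiment into persona
# segments. Lexicon and thresholds are illustrative.

POSITIVE = {"love", "great", "fast", "reliable"}
NEGATIVE = {"hate", "slow", "drop", "expensive"}

def sentiment(post: str) -> int:
    # Crude lexicon count: +1 per positive word, -1 per negative word.
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def persona(posts: list) -> str:
    # Combine a behavioral signal (post volume) with the emotional signal.
    avg = sum(sentiment(p) for p in posts) / len(posts)
    activity = "heavy" if len(posts) >= 5 else "light"
    mood = "satisfied" if avg > 0 else "frustrated"
    return f"{activity}-{mood}"

users = {
    "u1": ["love the fast network", "great coverage"],
    "u2": ["calls drop daily", "so slow and expensive"],
}
segments = {u: persona(p) for u, p in users.items()}
```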
Toponym Extraction and Disambiguation from Text: A Survey Windiastuti, Rizka; Krisnadhi, Adila Alfa; Budi, Indra
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2763

Abstract

A toponym is an essential element of geospatial information. Traditionally, toponyms are collected in a gazetteer through field surveys that require significant resources, including labor, time, and money. Nowadays, we can utilize social media and online news portals to collect event locations or toponyms from text. This article presents a survey of studies that focus on the extraction and disambiguation of toponyms from textual documents. While toponym extraction aims to identify toponyms in text, toponym disambiguation determines their specific locations on the earth. The survey covered articles published between January 2015 and April 2023, presented in English, and gathered from five major journal databases. The survey was conducted by adopting the Kitchenham guidelines, consisting of an initial article search, article selection, and an annotation process to facilitate the reporting phase. We employed Mendeley as a reference management tool and NVivo to categorize the parts of the articles that are the focal points of interest in this survey. The primary focus of the survey was on the methods or approaches the research articles use to extract and disambiguate toponyms. We also discuss some general challenges in toponym research, different applications of toponym extraction and disambiguation, data sources, and the use of languages other than English in the studies. The survey confirms that each approach has its limitations. Extracting and disambiguating toponyms from text is complex and challenging, especially for low-resource languages. We also suggest some research directions related to toponym extraction and disambiguation that could enrich the gazetteer.
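The two tasks the survey distinguishes can be sketched minimally: extraction finds toponym strings in text, and disambiguation picks coordinates for an ambiguous name. The tiny gazetteer and the most-populous-candidate heuristic below are illustrative stand-ins for real resources such as GeoNames, though population-based ranking is a genuinely common disambiguation baseline.

```python
# Gazetteer-lookup extraction plus population-based disambiguation.
# Entries and coordinates are illustrative.

GAZETTEER = {
    "paris": [
        {"country": "FR", "lat": 48.86, "lon": 2.35, "population": 2_100_000},
        {"country": "US", "lat": 33.66, "lon": -95.56, "population": 25_000},
    ],
    "jakarta": [{"country": "ID", "lat": -6.2, "lon": 106.8, "population": 10_600_000}],
}

def extract(text: str) -> list:
    # Extraction: keep tokens that appear in the gazetteer.
    return [t.strip(".,").lower() for t in text.split()
            if t.strip(".,").lower() in GAZETTEER]

def disambiguate(name: str) -> dict:
    # Disambiguation baseline: pick the most populous candidate.
    return max(GAZETTEER[name], key=lambda c: c["population"])

toponyms = extract("Flooding was reported in Jakarta and Paris.")
best = disambiguate("paris")
```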
A Model for Enhancing Pattern Recognition in Clinical Narrative Datasets through Text-Based Feature Selection and SHAP Technique Dalhatu, Sirajo Muhammad; Azmi Murad, Masrah Azrifah
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.3664

Abstract

Clinical narratives contain crucial patient information for predicting cardiac failure. Accurate and timely cardiac failure recognition (CFR) significantly impacts patient outcomes but faces challenges like limited dataset sizes, feature space sparsity, and underutilization of vital sign data. This study addresses these issues by developing a methodology to improve CFR accuracy and interpretability within clinical narratives. Four datasets—the Framingham Heart Study, Heart Disease from Kaggle, Cleveland Heart Disease, and Heart Failure Clinical Records—undergo preprocessing, including handling missing values, removing duplicates, scaling, encoding categorical variables, and transforming unstructured data using natural language processing (NLP). Various feature selection methods (Chi-Squared, Forward Selection, L1 Regularization) are used to identify influential features for CFR, and the SHapley Additive exPlanations (SHAP) technique is integrated to improve interpretability. Support Vector Machine (SVM), Logistic Regression (LR), and Random Forest (RF) models are trained and evaluated. Performance was evaluated using accuracy, precision, recall, f1-score, and area under the receiver operating characteristic curve (AUC-ROC). Results indicate that L1 Regularization with LR and Chi-Squared with RF perform best for specific datasets. The final model, combining all datasets with Forward Selection and RF, achieves high accuracy (91%), precision (87%), recall (97%), f1-score (91%), and AUC-ROC (94%). This study concludes that advanced text-based feature selection and SHAP interpretability significantly enhance CFR model accuracy and transparency, aiding clinical decision-making. Future research should incorporate more diverse datasets, explore advanced NLP techniques, and validate models in various clinical settings to enhance robustness and applicability.
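The final model above combines forward selection with a Random Forest. A minimal sketch of greedy forward selection follows; the scoring function and feature names here are invented stand-ins, whereas the paper would score subsets by cross-validated classifier accuracy.

```python
# Greedy forward selection: starting from an empty set, repeatedly add the
# feature that most improves the scoring function; stop when no feature helps.

def forward_selection(features, score, min_gain=0.0):
    selected, best = [], score([])
    while True:
        gains = {f: score(selected + [f]) - best
                 for f in features if f not in selected}
        if not gains:
            break
        f, g = max(gains.items(), key=lambda kv: kv[1])
        if g <= min_gain:
            break
        selected.append(f)
        best += g
    return selected, best

# Illustrative scorer: "bp" and "age" are most useful, "noise" hurts.
WEIGHTS = {"age": 0.10, "bp": 0.12, "chol": 0.03, "noise": -0.02}

def mock_score(subset):
    return 0.70 + sum(WEIGHTS[f] for f in subset)

chosen, acc = forward_selection(list(WEIGHTS), mock_score)
```

Features are added in order of marginal gain ("bp" first here), and the harmful "noise" feature is never selected.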
Recipient Feasibility Decision Support System Micro Small Medium Business Assistance Use Method Analytic Hierarchy Process and Simple Additives Weighting Abdullah, Dahlan; Erliana, Cut Ita; Bintoro, Andik; Hartono, Hartono; Ikhwani, Muhammad; Nazaruddin, Nazaruddin
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.2321

Abstract

This study aims to determine the eligibility of MSME (micro, small, and medium enterprise) assistance recipients using the AHP (Analytic Hierarchy Process) and SAW (Simple Additive Weighting) methods. AHP is used to determine the weight of each criterion, while SAW is used to rank the selected beneficiaries. This is very important for Indonesia's economy during a crisis, as MSMEs have the resilience to withstand an economic downturn. Six criteria are used: type of business, number of workers, monthly turnover, total assets, MSME sector, and business sector. Decision support systems are designed to support a person who must make particular decisions; they are interactive and flexible, with quality data and expert procedures. The study was conducted in Simpang Tiga Subdistrict, Pidie Regency, Aceh Province, to facilitate the selection of eligible recipients of government assistance for building community micro businesses. Testing in this study was performed using black-box testing, the results of which show that the system functions well, with the AHP method producing the criterion weights and the SAW method producing the ranking that determines the eligibility of MSME aid recipients. Accuracy testing of the AHP and SAW methods on the six criteria and the alternative requirements yields 75%.
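The SAW ranking step described above can be sketched compactly: normalize each benefit criterion by its column maximum, then rank alternatives by the weighted sum. The criterion weights would come from the AHP pairwise comparisons; the weights, criteria, and alternative values below are illustrative.

```python
# Simple Additive Weighting (SAW) over benefit criteria: score(a) =
# sum_c w_c * x_ac / max_a' x_a'c, then rank alternatives by score.

def saw_rank(alternatives, weights):
    criteria = list(weights)
    col_max = {c: max(a[c] for a in alternatives.values()) for c in criteria}
    scores = {
        name: sum(weights[c] * a[c] / col_max[c] for c in criteria)
        for name, a in alternatives.items()
    }
    return sorted(scores, key=scores.get, reverse=True), scores

weights = {"turnover": 0.5, "assets": 0.3, "workers": 0.2}  # AHP-derived (illustrative)
alternatives = {
    "msme_a": {"turnover": 8.0, "assets": 4.0, "workers": 4.0},
    "msme_b": {"turnover": 4.0, "assets": 8.0, "workers": 4.0},
}
ranking, scores = saw_rank(alternatives, weights)
```

Here msme_a wins on the heavily weighted turnover criterion, so it ranks first despite losing on assets.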
Classification of Coral Images Using Support Vector Machine with Gray Level Co-Occurrence Matrix Feature Extraction Nababan, Adi Pandu Rahmat; Haryanto, Toto; Wijaya, Sony Hartono
JOIV : International Journal on Informatics Visualization Vol 9, No 3 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.3.2708

Abstract

This research developed a coral image classification method using Support Vector Machine (SVM) with Gray Level Co-occurrence Matrix (GLCM) feature extraction to improve the accuracy of coral reef condition monitoring. Coral images were collected in the waters of Sangihe Islands Regency and labelled by experts into healthy, unhealthy, and dead categories. Preprocessing included cropping, background removal, sharpening, and image normalization. GLCM feature extraction was performed with distances of 1, 2, and 3 pixels and directions of 0°, 45°, 90°, and 135°. The SVM uses Linear, Radial Basis Function, and Polynomial kernels with parameters tuned over a grid. The results indicate that the polynomial kernel with parameters C=10, degree=3, and gamma=1 achieves the highest accuracy, at 91.85%. Oversampling increased the accuracy by 2.17%, while feature selection by boxplot and model-based methods decreased the accuracy by 0.8% and 0.2%, respectively. On the other hand, feature selection using correlation analysis significantly decreased accuracy by 16.11%. These findings significantly contribute to coral reef conservation by offering a more accurate and efficient classification method. This method enables better and timely monitoring of coral reef conditions, thus supporting more effective conservation interventions. Integrating these research results into IoT systems can improve overall coral reef monitoring and conservation efforts.
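The GLCM texture descriptors fed to the SVM can be illustrated with a hand-rolled co-occurrence matrix for a single offset (distance 1, direction 0°) and two of the usual Haralick features. This is a sketch only; in practice a library such as scikit-image computes the full set of distances and directions.

```python
import numpy as np

# GLCM for horizontally adjacent pixel pairs, normalized to probabilities,
# plus the contrast and homogeneity features.

def glcm(img, levels):
    m = np.zeros((levels, levels), dtype=float)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1                      # count pair (left_value, right_value)
    return m / m.sum()                    # normalize to co-occurrence probabilities

def contrast(p):
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())   # penalizes large gray-level jumps

def homogeneity(p):
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())  # rewards near-diagonal mass

img = np.array([[0, 0, 1], [1, 2, 2], [0, 1, 1]])   # tiny 3-level image
p = glcm(img, levels=3)
```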
Issues in Chinese Requirements Specifications: Insights from Survey Data and Static Analysis Jiaying, He; Yap, Ng Keng; Osman, Mohd Hafeez; Hassan, Sa’adah
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.3667

Abstract

Requirements engineering is crucial for software project success. Issues like requirements ambiguity, inconsistency, and unverifiability contribute to unclear, conflicting, or untestable specifications, which can undermine the effectiveness and success of a software project. These issues have been identified as factors contributing to software project failure. However, there’s limited research on the current state of these issues in China. The research objectives of this study are to address the most commonly used methods for expressing Chinese software requirements and uncover issues related to ambiguity, inconsistency, and unverifiability, which can be solved by using artificial intelligence techniques to investigate possible solutions to these problems. An online survey of 422 software professionals in China identifies key issues in Chinese software requirement expressions that AI techniques can address. The study examines various expression methods, tools for enhancing clarity, and challenges specific to Chinese requirements. Findings reveal that ambiguity, inconsistency, and unverifiability significantly impact project success. While natural language specification and prototyping improve clarity, they may increase the time required for requirements engineering. Effective communication is typically achieved through natural language, prototyping, storyboarding, and pseudo-coding, whereas decision tables and block diagrams are less commonly used and linked to problematic requirements. Using tables, prototype diagrams, and natural language descriptions helps mitigate these issues, though it may extend engineering time. The study suggests strategies to improve the efficiency and quality of requirements expression and highlights the need to develop Chinese boilerplates and refining tools to enhance clarity in the future.
Comparison of Convolutional Neural Networks Transfer Learning Models for Disease Classification of Food Crop Faurina, Ruvita; Rahma, Silvia; Vatresia, Arie; Susanto, Agus
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.1936

Abstract

Indonesia is an agricultural country with 29% of its workforce employed in the agricultural sector; however, farmers' knowledge and practices depend on informal local wisdom based on inherited past practices. Moreover, identifying diseases in plants is difficult with human vision alone, so intelligent technology is needed. In this paper, CNN architectures such as MobileNetV2, ResNet50, InceptionV3 and DenseNet121 are built to detect diseases based on leaf images of several crops obtained from the agroai dataset, which contains multiple crops, namely bean, chili, corn, potato, tomato and tea. The models use transfer learning for feature extraction, with the pretrained ImageNet weights followed by 4 fully connected layers. Each model for each crop is compared to find the best model based on training, evaluation and testing accuracy. ResNet50 has the best performance for four types of plants: bean plants with training accuracy of 99.49%, validation of 99.52%, and testing of 98.96%; chili plants with training accuracy of 98.03%, evaluation of 98.75%, and testing of 100%; tea plants with training accuracy of 99.62%, evaluation of 99.6%, and testing of 99.74%; and tomato plants with training accuracy of 99.62%, validation of 99.7%, and testing of 99.37%. Moreover, MobileNetV3 has the best performance for 2 types of crops: corn, with training accuracy of 99.22%, validation of 99.69%, and testing of 99.55%; and potato, with training accuracy of 99.62%, evaluation of 99.60%, and testing of 99.74%.
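The transfer-learning setup the abstract describes (frozen pretrained feature extractor, trainable fully connected head) can be illustrated with a toy numpy sketch. The fixed random projection below stands in for ImageNet-pretrained convolutional layers, and the task is synthetic; the real experiments use full CNN frameworks.

```python
import numpy as np

# Frozen "backbone" (a fixed random projection) plus a trainable logistic
# head, mimicking transfer learning where only the head's weights are updated.

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(16, 4))        # pretrained-backbone stand-in (never updated)

def features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen feature extraction (ReLU)

X = rng.normal(size=(200, 16))
w_true = np.array([1.0, -1.0, 0.5, -0.5])  # synthetic labeling rule in feature space
y = (features(X) @ w_true > 0).astype(float)

F = features(X)                            # extract once; backbone stays fixed
w, b = np.zeros(4), 0.0                    # trainable head only
for _ in range(2000):                      # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.1 * F.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
```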
Social Platforms in the Deepfake Age: Navigating Media Trust through Media Literacy Lee, Fong Yee; Kumaresan, S Prabha; Abdulwahab Anaam, Elham; Chee Kong, Wong
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.3490

Abstract

The issues with the social media landscape are the proliferation of disinformation and misinformation. The widespread use of deepfakes makes it harder to distinguish between authentic and fabricated content. The mediating effect of media literacy on news credibility has been understudied in previous research; the objective of this study is to investigate how much media literacy, news skepticism, and fear of missing out (FOMO) influence users' trust in the news disseminated on social media platforms. To achieve this, a survey was conducted to assess trust in and skepticism towards social media news, FOMO levels, and media literacy associated with deepfake news content. Educational efforts and media literacy initiatives are crucial in fostering informed and discerning news consumption. Furthermore, news organizations must continue to prioritize transparency and accuracy to maintain credibility on social media, since news is easily accessible in the era of information overload. A limitation of the study was that it did not assess the effectiveness of media literacy in combating fabricated news content on social media. It is suggested to broaden the scope by studying additional factors that combat fake news, such as journalistic standards, fact-checking, and verification, which are important for building readers' trust. Future studies should also measure the effectiveness of media literacy initiatives to ensure they really make a difference. The generalizability of future studies can be strengthened by including diverse age groups, especially vulnerable populations.
Involvement of Various Selection Methods for Genetic Algorithms in Determining the Optimal Production Schedule Problem Muliono, Rizki; Silviana, Nukhe Andri; Novita, Nanda
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.2632

Abstract

This research investigates using genetic algorithms (GA) to optimize production scheduling in Medan's shoe industry. The study compares traditional manual and First Come First Serve (FCFS) methods against a GA approach, incorporating selection variations such as Boltzmann, Fitness Uniform Selection Scheme (FUSS), Exponential Rank Selection, and Roulette Wheel Selection. The optimal production order is derived from the chromosome with the highest fitness. Results indicate that GA with FUSS selection significantly reduces production time from 73,630 minutes to 45,650 minutes, achieving a 35% improvement in efficiency. This optimization is attributed to FUSS's ability to maintain a diverse population, preventing premature convergence and ensuring broader exploration of the solution space. Additionally, it was found that using a smaller population size relative to the number of generations yields better optimization results. The study also demonstrates that while Roulette Wheel Selection shows more variability, it achieves higher optimization over time than FCFS. The practical implications of these findings are substantial for the shoe industry, including faster production cycles, better resource allocation, and an enhanced ability to meet customer demands. These benefits are exemplified by implementing the SISPROMA application, an innovative production scheduling information system that leverages machine learning to optimize scheduling in the manufacturing industry. This study provides valuable insights into applying genetic algorithms for production scheduling, highlighting their potential to enhance operational efficiency and reduce costs. Future research should explore additional optimization techniques and real-world applications to validate and extend these findings, ensuring broader applicability and continuous improvements in manufacturing efficiency.
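Of the selection schemes compared above, roulette wheel selection is the simplest to sketch: each chromosome is picked with probability equal to its share of the population's total fitness. The schedule names and fitness values below are illustrative.

```python
import random

# Fitness-proportional (roulette wheel) selection: spin a point on [0, total
# fitness) and return the chromosome whose cumulative-fitness slice contains it.

def roulette_wheel(population, fitness, rng):
    total = sum(fitness)
    r = rng.uniform(0.0, total)
    cum = 0.0
    for chrom, fit in zip(population, fitness):
        cum += fit
        if r <= cum:
            return chrom
    return population[-1]          # guard against floating-point edge cases

rng = random.Random(42)
population = ["sched_a", "sched_b", "sched_c"]
fitness = [1.0, 3.0, 6.0]          # e.g. inverse makespan (illustrative)
picks = [roulette_wheel(population, fitness, rng) for _ in range(10_000)]
share_c = picks.count("sched_c") / len(picks)
```

Over many draws, sched_c should be chosen roughly 60% of the time (6/10 of total fitness), which is the variability-versus-exploitation trade-off the study observes for this scheme.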
Diagnosis of Diseases in Rubber Stems Using the Dempster Shafer Method Sukmono, Yudi; Pratiwi, Sinthya Ayu; Hatta, Heliza Rahmania; Septiarini, Anindita; Padmo Azam Masa, Amin; Wijayanti, Arini
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.3474

Abstract

Rubber (Hevea brasiliensis) is a non-timber forest product originating from the Americas and is currently widely distributed worldwide, including in East Kalimantan, Indonesia. In managing it in East Kalimantan, farmers often encounter diseases in rubber plants, especially diseases of the stems, which can cause plant death. This disease requires treatment, but if it becomes too severe, it can harm farmers economically and in production, so it is essential for farmers to recognize the symptoms of this disease early from changes in the rubber plant stems. This study aims to diagnose diseases of rubber stems using the Dempster-Shafer method. Dempster-Shafer is a relevant method for handling the uncertainty of symptoms and rules, enabling expert systems to generate conclusions with certainty. This method has advantages in solving various problems and in simultaneously combining evidence (facts) from several sources. This research was conducted by analyzing a dataset of 80 records, covering 7 types of diseases and 27 different symptoms. The accuracy test results show that the research has an accuracy rate of 96.25%. The implications of this research are significant. It is hoped that it can significantly help rubber plantation farmers in East Kalimantan and also make a valuable contribution to agricultural and plantation extension agents in overcoming the challenges posed by diseases in rubber plant stems. Thus, this research could increase the productivity and sustainability of the rubber plantation sector in this region.
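The evidence-combining step at the core of the method above is Dempster's rule of combination: two mass functions over focal sets are multiplied pairwise, mass on contradictory intersections is treated as conflict K, and the rest is renormalized by 1 - K. The disease names and symptom masses below are illustrative, not taken from the paper's knowledge base.

```python
from itertools import product

# Dempster's rule: combine two mass functions over frozensets of hypotheses.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y            # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Evidence from two symptoms over diseases {"mouldy_rot", "stem_canker"}:
m1 = {frozenset({"mouldy_rot"}): 0.6,
      frozenset({"mouldy_rot", "stem_canker"}): 0.4}
m2 = {frozenset({"mouldy_rot"}): 0.5,
      frozenset({"stem_canker"}): 0.3,
      frozenset({"mouldy_rot", "stem_canker"}): 0.2}
m = combine(m1, m2)
```

Here the conflicting mass is K = 0.6 × 0.3 = 0.18, and after renormalization the combined belief concentrates on mouldy_rot (0.62 / 0.82 ≈ 0.76), matching the intuition that both symptoms mostly point the same way.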
