Contact Name
Rahmat Hidayat
Contact Email
mr.rahmat@gmail.com
Phone
-
Journal Mail Official
rahmat@pnp.ac.id
Editorial Address
-
Location
Kota Padang,
Sumatera Barat
INDONESIA
JOIV : International Journal on Informatics Visualization
ISSN : 2549-9610     EISSN : 2549-9904     DOI : -
Core Subject : Science
JOIV : International Journal on Informatics Visualization is an international peer-reviewed journal dedicated to the exchange of high-quality research results in all aspects of Computer Science, Computer Engineering, Information Technology, and Visualization. The journal publishes state-of-the-art papers on fundamental theory, experiments and simulation, and applications, with a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, JOIV follows an open-access policy that makes published articles freely available online without any subscription.
Arjuna Subject : -
Articles 54 Documents
Search results for issue "Vol 7, No 4 (2023)": 54 Documents
Automated Staging of Diabetic Retinopathy Using Convolutional Support Vector Machine (CSVM) Based on Fundus Image Data Novitasari, Dian C Rini; Fatmawati, Fatmawati; Hendradi, Rimuljo; Nariswari, Rinda; Saputra, Rizal Amegia
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1501

Abstract

Diabetic Retinopathy (DR) is a complication of diabetes mellitus that attacks the eyes and often leads to blindness. The number of DR patients is increasing significantly because many people with diabetes are unaware that chronic diabetes has already caused complications. Some patients also complain that the diagnostic process is long and expensive, so automatic early detection using Computer-Aided Diagnosis (CAD) is necessary. The DR classification process consists of two steps: preprocessing and classification. Preprocessing consists of resizing and augmenting the data, while classification uses the Convolutional Support Vector Machine (CSVM) method. CSVM combines the CNN and SVM methods so that feature extraction and classification form a single unit: in the first stage, convolutional features are extracted using an existing CNN architecture, and an SVM then performs the classification. CSVM overcomes CNN's main shortcoming, training time: it accelerated the learning process without reducing accuracy in the 2-class, 3-class, and 5-class experiments. The best result was achieved in 2-class classification using CSVM with data augmentation, which reached an accuracy of 98.76% in 8 seconds. In contrast, CNN with data augmentation obtained an accuracy of only 86.15% in 810 minutes 14 seconds. It can be concluded that CSVM was faster than CNN and also achieved better accuracy in classifying DR.
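The CSVM design described above feeds CNN-extracted features into an SVM classifier. The paper's architecture is not given here, so the following is only a minimal sketch of the second stage: a linear SVM trained by hinge-loss subgradient descent on toy feature vectors standing in for convolutional features.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM (hinge loss + L2 penalty) by subgradient descent.
    X: (n, d) feature vectors (stand-ins for CNN convolutional features);
    y: labels in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:          # inside the margin: hinge gradient fires
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                   # outside the margin: only regularization
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)

# Toy, linearly separable "features" for two hypothetical DR classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 4)), rng.normal(2, 0.5, (50, 4))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
acc = (predict(X, w, b) == y).mean()
```

Because the SVM trains on fixed feature vectors rather than backpropagating through convolutional layers, this stage is far cheaper than end-to-end CNN training, which is the speedup the abstract reports.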
Comparative Analysis of VGG-16 and ResNet-50 for Occluded Ear Recognition Tey, Hua-Chian; Chong, Lee Ying; Chong, Siew-Chin
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.2276

Abstract

Occluded ear recognition is a challenging task in biometric systems because occlusions can hinder accurate identification, and there is still a research gap in making deep learning robust to different severities of occlusion across datasets. This research develops a robust occluded ear recognition system by fine-tuning three popular pre-trained deep learning models: Residual Neural Network (ResNet-50), Visual Geometry Group (VGG-16), and EfficientNet. The system is evaluated on two manually occluded ear datasets, the AMI ear dataset and the IITD ear dataset. The experimental results showed that the fine-tuned ResNet-50 model performs better than the fine-tuned VGG-16 model. The results also indicate that the models' ability to predict the correct classes decreases as more of the image is occluded: higher occlusion rates remove important information, making it harder for a model to distinguish between patterns, and identification accuracy worsened as the occlusion became larger. In the future, ear recognition systems will likely continue to improve in accuracy and be adopted by a wider range of organizations and industries; they may also be integrated with other biometric technologies and used for personalization. However, ethical considerations related to the use of ear recognition systems will also need to be addressed.
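The study evaluates on manually occluded versions of the AMI and IITD datasets. The paper's exact occlusion procedure is not given here; one common way to synthesize controlled occlusion, sketched below, is to zero out a random square patch covering a chosen fraction of the image.

```python
import numpy as np

def occlude(image, fraction, seed=0):
    """Return a copy of `image` with a random square patch zeroed out,
    covering roughly `fraction` of the pixels (simulated occlusion)."""
    h, w = image.shape[:2]
    side = int(round((fraction * h * w) ** 0.5))
    side = min(side, h, w)
    rng = np.random.default_rng(seed)
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    out = image.copy()
    out[top:top + side, left:left + side] = 0
    return out

ear = np.ones((100, 100))        # stand-in for an AMI/IITD ear image
occluded = occlude(ear, 0.25)    # simulate 25% occlusion
covered = 1.0 - occluded.mean()  # fraction of pixels zeroed out
```

Sweeping `fraction` upward reproduces the experimental axis the abstract describes: accuracy degrades as the occluded area grows.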
Performance Improvement of Deep Convolutional Networks for Aerial Imagery Segmentation of Natural Disaster-Affected Areas Nugraha, Deny Wiria; Ilham, Amil Ahmad; Achmad, Andani; Arief, Ardiaty
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1383

Abstract

This study proposes a framework for improving performance and explores the application of Deep Convolutional Networks (DCN), using the best parameters and criteria to accurately produce aerial-imagery semantic segmentation of natural disaster-affected areas. Two models are used: U-Net and the Pyramid Scene Parsing Network (PSPNet). Extensive experiments show that the Grid Search algorithm can improve the performance of both models; previous research had not used Grid Search to improve performance in aerial-imagery segmentation of disaster-affected areas. The Grid Search algorithm tunes the DCN parameters, the data augmentation criteria, and the dataset criteria for pre-training. The most optimal DCN model is PSPNet(152) (bpc), which uses the best parameters and criteria and achieves a mean Intersection over Union (mIoU) of 83.34%, a significant mIoU increase of 43.09% compared to using only the default parameters and criteria (the baselines). Validation with the k-fold cross-validation method on this most optimal model produced an average accuracy of 99.04%. PSPNet(152) (bpc) can detect and identify objects with irregular shapes and sizes, important objects affected by natural disasters such as flooded buildings and roads, and small objects such as vehicles and pools, which are among the most challenging targets for semantic segmentation models. This study also shows that increasing the network layers in the PSPNet-(18, 34, 50, 101, 152) models, using the best parameters and criteria, improves performance. The results indicate that further research should use a dedicated dataset of Unmanned Aerial Vehicle (UAV) aerial imagery during the pre-training stage for transfer learning to improve DCN performance.
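The mean Intersection over Union (mIoU) metric quoted above can be computed directly from predicted and ground-truth label maps; this minimal numpy sketch uses toy 1-D label arrays rather than the study's aerial imagery.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union, averaged over the classes that
    actually appear in either the prediction or the target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy flattened "segmentation maps" with 3 classes.
target = np.array([0, 0, 1, 1, 2, 2])
pred   = np.array([0, 0, 1, 2, 2, 2])
miou = mean_iou(pred, target, num_classes=3)
```

In a Grid Search loop, each candidate combination of hyperparameters and augmentation criteria would be scored with a metric like this on validation imagery, and the best-scoring combination kept.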
Stock Price Movement Classification Using Ensembled Model of Long Short-Term Memory (LSTM) and Random Forest (RF) Gunawan, Albertus Emilio Kurniajaya; Wibowo, Antoni
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1640

Abstract

Stock investing is known worldwide as a source of passive income available to everyone. To increase potential profit, many researchers and investors search for the most profitable strategy; machine learning and deep learning are two approaches to predicting a stock's movement and deciding on such a strategy. To reach this goal, this research experiments with Random Forest (RF) and Long Short-Term Memory (LSTM), trying each individually and merging them into an ensembled model. RF is used to classify the results from the LSTM models obtained throughout the Hyperparameter Optimization (HPO) process; this design reduces the time needed to train and optimize each LSTM model inside the ensembled model. Another measure taken to reduce training time is classifying returns over longer periods. The dataset consists of the 45 stocks listed in LQ45 as of August 2021. The results show that LSTM gives better results than the RF model, especially when Bayesian Optimization is used as the HPO method, and that the ensembled model achieves better precision in predicting stocks than the LSTM model alone. Future work can focus on the model structure, additional model types as ensemble estimators, improvements to model efficiency, and research into datasets for stock movement prediction.
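The ensemble described above stacks the outputs of several LSTM models and lets a second-stage classifier decide the final movement class. As a sketch of that data flow only: the probabilities below are hypothetical, and a simple majority vote stands in for the Random Forest meta-classifier the paper trains.

```python
import numpy as np

def stack_predictions(base_probs):
    """Stack per-model 'up-move' probabilities into a meta-feature matrix.
    base_probs: (n_models, n_samples) outputs of the base LSTM models."""
    return np.asarray(base_probs).T          # shape (n_samples, n_models)

def majority_vote(meta_features, threshold=0.5):
    """Toy meta-classifier: majority vote over base models, standing in
    for the Random Forest trained on the stacked LSTM outputs."""
    votes = meta_features > threshold
    return (votes.sum(axis=1) * 2 > meta_features.shape[1]).astype(int)

# Three hypothetical LSTM configurations scoring five trading periods.
probs = [[0.9, 0.2, 0.6, 0.4, 0.8],
         [0.7, 0.3, 0.4, 0.6, 0.9],
         [0.8, 0.1, 0.7, 0.3, 0.6]]
meta = stack_predictions(probs)
signals = majority_vote(meta)                # 1 = predicted up-move
```

Replacing `majority_vote` with a trained classifier fitted on out-of-sample base predictions gives the stacked-ensemble structure the abstract describes.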
Digital Literacy toward Historical Knowledge: Implementation of the Bukittinggi City History Website as an Educational Technology Fatimah, Siti; Hidayat, Hendra; Sulistiyono, Singgih Tri; Alhadi, Zikri; Firza, Firza
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.2224

Abstract

This research aims to identify the challenges and opportunities in integrating digital literacy skills into history education, focusing on the difficulties educators face in integrating digital technologies into the traditional history curriculum. Educational technology serves as a means of supporting education and learning, including historical knowledge accessed through history websites. This study analyses digital literacy toward historical knowledge using the Bukittinggi City history website. The research is quantitative, using a survey with closed-ended questions. The population is the millennial generation in Indonesia; samples were taken with a non-probability, purposive sampling approach, and 831 respondents spread throughout Indonesia participated. The data were analyzed with Partial Least Squares Structural Equation Modelling (PLS-SEM). The results showed no difference in historical knowledge scores between men and women. The coefficient of determination (R²) of 0.697 indicates that the model explains 69.7% of the variance in historical knowledge, while the predictive relevance test (Q²) was used to gauge the predictive usefulness of the model's independent variables. Men may be more adept at using online resources to broaden their knowledge of the city's past. Understanding the disparities in digital literacy between men and women will significantly affect the design of educational and literacy programs in Bukittinggi: enhancing digital literacy can promote access to and understanding of the city's history, especially among women.
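The coefficient of determination quoted above (R² = 0.697, i.e. 69.7% of variance explained) follows the standard definition; this small sketch computes it on toy scores, not the study's PLS-SEM data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: the share of variance in y_true
    explained by the predictions."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy historical-knowledge scores: a perfect predictor gives R^2 = 1.
y = np.array([3.0, 5.0, 7.0, 9.0])
perfect = r_squared(y, y)
r2 = r_squared(y, np.array([3.5, 4.5, 7.5, 8.5]))
```

An R² of 0.697 means residual variation accounts for the remaining 30.3%, which is why the study also reports Q² as a separate check on predictive relevance.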
Improvement of Starling Image Classification with Gabor and Wavelet Based on Artificial Neural Network Rahman, Aviv Yuniar; Istiadi, Istiadi; Hananto, April Lia; Fauzi, Ahmad
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1381

Abstract

Indonesia has a diversity of animal species that ranks among the top 10 in the world, and its starling population is widely known. The starlings found in Indonesia are diverse, ranging from common to rare species, each with its own characteristics of type, color, sound, and so on. Two problems motivate this work: first, the classification accuracy when using the GLCM texture feature with an Artificial Neural Network was 68%; second, the accuracy when using the GLCM texture feature with a Decision Tree was 50%. This research aims to improve the accuracy of the starling classification system using Gabor and Wavelet texture features with an Artificial Neural Network. Based on tests classifying starlings with the GLCM, Gabor, and Wavelet features, the highest precision was obtained with the combined GLCM and Wavelet features, whose accuracy reached 83% at a learning rate of 0.9. In the experiments performed, the GLCM and Wavelet features increased accuracy when used with Artificial Neural Networks, and the computational time needed in testing to produce these accuracy values was much faster. In addition, the accuracy of classifying starling categories during testing also increased.
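The Gabor texture features mentioned above come from filtering an image with Gabor kernels, a Gaussian envelope modulated by a sinusoid at a chosen orientation and wavelength. The sketch below builds the standard real-valued kernel with numpy; the parameter values are illustrative, not the paper's.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real Gabor filter kernel: a Gaussian envelope modulated by a cosine.
    size: odd kernel width/height; theta: orientation in radians;
    lam: wavelength of the sinusoid; gamma: spatial aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / lam + psi)

k = gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0)
# Texture features are typically statistics (mean, energy) of the
# image filtered with a bank of such kernels at several orientations.
energy = float((k**2).sum())
```

A feature vector for the ANN would concatenate such statistics across a bank of orientations and wavelengths, alongside the GLCM and wavelet descriptors the study compares.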
Software Quality Measurement for Functional Suitability, Performance Efficiency, and Reliability Characteristics Using Analytical Hierarchy Process Sarwosri, Sarwosri; Rochimah, Siti; Laili Yuhana, Umi; Balqis Hidayat, Sultana
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.2441

Abstract

The quality model used in this paper is ISO 25010, with Functional Suitability, Performance Efficiency, and Reliability as the characteristics measured. The case study is the ITS Academic Information System, and the AHP (Analytical Hierarchy Process) method is the basis of calculation. The initial stage is to compile a questionnaire, which is then filled out by three stakeholder groups: experts, students, and developers. With the AHP method, experts analyze the questionnaire results to determine the required weights, which are then used to calculate the quality of the software. There are two types of software measurement inputs: student questionnaires and developer questionnaires. Automatic measurements are carried out on the Time Behavior aspect, namely Response Time Testing; at this stage, the URL to be tested is used as data input, the response time of the destination URL is measured, and the result is converted to a scale of one hundred. The final values of these two types of measurements are combined in several equations to obtain the final software quality value. The result of the study is an automatic measuring instrument for software quality, whose measurements can be used as feedback for improvements so that the quality value increases when measured again. Regarding Functional Suitability, the ITS Academic Information System provides features according to user needs; regarding Performance Efficiency, it delivers performance according to user needs; and regarding Reliability, it can carry out its functions under the specified conditions and times.
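AHP derives the characteristic weights from a pairwise-comparison matrix filled in by the experts. The sketch below uses the common geometric-mean (row) method; the judgment values in the matrix are hypothetical, not the study's actual expert data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix using the
    geometric-mean method; the weights are normalized to sum to 1."""
    A = np.asarray(pairwise, dtype=float)
    geo = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
    return geo / geo.sum()

# Hypothetical expert judgments for the three ISO 25010 characteristics:
# Functional Suitability vs Performance Efficiency vs Reliability.
# A[i][j] > 1 means characteristic i is judged more important than j.
A = [[1.0, 3.0, 2.0],
     [1 / 3, 1.0, 1 / 2],
     [1 / 2, 2.0, 1.0]]
w = ahp_weights(A)   # one weight per characteristic
```

Each characteristic's questionnaire score would then be multiplied by its weight and summed to obtain the overall quality value described in the abstract.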
Mel Frequency Cepstral Coefficients (MFCC) Method and Multiple Adaline Neural Network Model for Speaker Identification Sasongko, Sudi Mariyanto Al; Tsaury, Shofian; Ariessaputra, Suthami; Ch, Syafaruddin
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1376

Abstract

Speech recognition technology makes human interaction with computers more accessible. There are two phases in the speaker recognition process: capturing or extracting voice features, and identifying the speaker's voice pattern based on the characteristics of each speaker's voice. The speakers consist of men and women whose voices are recorded and stored in a computer database. Mel Frequency Cepstral Coefficients (MFCC) with 13 coefficients are used at the voice extraction stage. MFCC is based on the variation of the human ear's critical-band response with frequency (linear at low frequencies, logarithmic at high frequencies): each sound frame is converted to the mel-frequency scale and processed with several triangular filters to obtain the cepstral coefficients. At the speech pattern recognition stage, an artificial neural network (ANN) Madaline model (many Adalines, the plural form of Adaline) compares the test voice's features against the training voice features entered as training data. The Madaline network is trained with BFGS Quasi-Newton Backpropagation with a goal parameter of 0.0001. The study's results indicate that the Madaline model of artificial neural networks is not recommended for identification research: the recognition rate for speech in the database reached only 61% over ten tests, while only 14% of tests with voices outside the database were rejected, and 84% of out-of-database tests using words different from the training data were rejected. The results of this model can be used as a reference for creating an Android-based real-time system.
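The mel scale underlying MFCC maps frequency so that it is roughly linear below about 1 kHz and logarithmic above, matching the ear's critical-band response the abstract mentions. The standard conversion formulas can be sketched directly:

```python
import math

def hz_to_mel(f):
    """Convert frequency in Hz to the mel scale (the standard
    2595 * log10(1 + f/700) formula used in MFCC filterbanks)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping, used to place the edges of the triangular
    mel filters back on the Hz axis."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Evenly spaced points on the mel scale bunch together at low
# frequencies, so the triangular filters are denser where the ear
# discriminates best.
edges_hz = [mel_to_hz(m) for m in (0.0, 500.0, 1000.0, 1500.0)]
```

Applying these triangular filters to each frame's spectrum, taking logs, and then a discrete cosine transform yields the 13 cepstral coefficients used as the speaker features.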
Literature Reviews of RBV and KBV Theories Reimagined - A Technological Approach Using Text Analysis and Power BI Visualization Arief, Ikhwan; Hasan, Alizar; Putri, Nilda Tri; Rahman, Hafiz
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1940

Abstract

Over the years, the Resource-Based View (RBV) and Knowledge-Based View (KBV) have solidified their roles as pivotal paradigms in strategic management literature. With an emphasis on Small and Medium Enterprises (SMEs), this study uses text analysis and Microsoft Power BI to explore these concepts innovatively. The study implements a systematic literature review, extracting data from Scopus, Web of Science, and DOAJ databases to assemble a comprehensive literature corpus. The methodology incorporates text analysis to draw out key themes, relationships, and trends, and these are subsequently visualized using Power BI to create an engaging, interactive representation of data. Components like word clouds, co-occurrence networks, and trend lines are generated, while Power BI's dynamic filtering and drill-down functionalities facilitate thorough data investigation. The results display significant overlap between RBV and KBV, denoting possible integration junctures for these theories within the domain of strategic management. Additionally, the study underscores the relevance of these insights for SMEs, emphasizing the part played by unique resources, encompassing knowledge assets, in catalyzing innovation and fostering a competitive edge. The study concludes by recognizing the significant theoretical and practical implications of integrating text analysis and Power BI in conducting literature reviews. This methodology bolsters our understanding of RBV and KBV, offering small and medium-sized enterprises a beneficial instrument to traverse these intricate theories. The study suggests that future research could broaden the application of this methodological approach to encompass other strategic management theories.
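The co-occurrence networks visualized in Power BI start from simple pair counts: how often two terms appear in the same document. A minimal stdlib sketch of that counting step, using hypothetical abstract snippets rather than the study's Scopus/Web of Science/DOAJ corpus:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(documents):
    """Count how often each word pair appears in the same document --
    the raw edge weights behind a co-occurrence network."""
    counts = Counter()
    for doc in documents:
        words = sorted(set(doc.lower().split()))   # unique terms per doc
        counts.update(combinations(words, 2))      # each pair once per doc
    return counts

# Hypothetical document snippets mentioning the two theories.
docs = ["RBV resources firm",
        "KBV knowledge firm",
        "RBV KBV integration firm"]
net = cooccurrence(docs)
```

Exported as an edge list (term pair, weight), such counts can be loaded into Power BI to drive the network and trend visuals the study describes.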
Comparison of the Packet Wavelet Transform Method for Medical Image Compression Atmaja, I Made Ari Dwi Suta; Triadi, Wilfridus Bambang; Astawa, I Nyoman Gede Arya; Radhitya, Made Leo
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1732

Abstract

Medical images are often used for educational, analytical, and medical diagnostic purposes, and medical image data requires large amounts of computer storage. Three wavelet families, namely Haar, Daubechies, and Biorthogonal, were used in this study, which aims to find the best of the three tested wavelet methods. The study uses medical images representing ultrasound (USG) and CT-scan images as test data. The first test compares compression ratios at three threshold values: 30, 40, and 50. The second test measures PSNR values at the different thresholds. The third test compares the rate (image size) against the PSNR value, and the final test measures each medical image's compression and decompression times. The first compression-ratio test on both medical images showed that CT-scan images compressed with the Haar and Biorthogonal wavelets performed best, with an average compression ratio of 40.76% and a PSNR of 33.77. The PSNR obtained also increases for tests with larger image sizes. The average compression time is 0.52 seconds, and the average decompression time is 2.27 seconds. Based on the test results, this study recommends the Daubechies wavelet method for compression, at 0.51 seconds, and the Biorthogonal wavelet method for medical image decompression, at 1.69 seconds.
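The two evaluation metrics used throughout the study, PSNR and compression ratio, follow standard definitions and can be sketched in a few lines; the arrays and byte counts below are toy values, not the study's medical images.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between an image and its
    compressed-then-reconstructed version (higher is better)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Percentage reduction in size, the way the study reports it."""
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

img = np.full((8, 8), 100.0)          # toy 8x8 gray image
noisy = img + 5.0                     # uniform reconstruction error
quality = psnr(img, noisy)            # MSE = 25 -> about 34 dB
saved = compression_ratio(1000, 593)  # toy sizes -> about 40.7% smaller
```

In wavelet compression, raising the threshold zeroes more detail coefficients, which increases the compression ratio but lowers PSNR, the trade-off the study's first two tests explore.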