Articles

Found 5 Documents
Journal : JOIV : International Journal on Informatics Visualization

Roboswab: A Covid-19 Thermal Imaging Detector Based on Oral and Facial Temperatures
I Nyoman Gede Arya Astawa; I.D.G Ary Subagia; Felipe P. Vista IV; IGAK Cathur Adhi; I Made Ari Dwi Suta Atmaja
JOIV : International Journal on Informatics Visualization Vol 7, No 1 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.1.1505

Abstract

The SARS-CoV-2 virus is the cause of coronavirus disease (COVID-19). The symptoms of COVID-19 begin like a common cold and can become very severe, resembling those of Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). Currently, polymerase chain reaction (PCR) is used to detect COVID-19 accurately, but performing the test causes discomfort to the patient. Therefore, the proposed "Roboswab" was developed, which uses thermal imaging to measure facial and oral temperature without contact. This study focuses on the performance of the proposed equipment in measuring facial and oral temperature from various distances. Face detection also checks whether the subject is wearing a mask. Image processing methods with thermal imaging and robotic manipulators are integrated into a contact-free detector that is inexpensive, accurate, and painless. This research successfully detected masked and non-masked faces and measured facial temperature accurately. The results showed that facial temperature measurement was 90% accurate with a mask, with an error of +/- 0.05%, and 100% accurate without a mask. Oral temperature was measured with 97% accuracy and an error of less than 5%. The optimal distance from the Roboswab to the face for measuring temperature averages 60 cm. The Roboswab tool, equipped with masked and non-masked face detection, can be used for early detection of COVID-19 without direct contact with patients.
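As an illustration of the non-contact measurement the abstract describes, the sketch below detects a face in a grayscale thermal frame and converts pixel intensity in the face region to an estimated temperature. The Haar cascade (trained for visible-light images) and the linear intensity-to-Celsius calibration are assumptions for illustration only; the paper's actual detector, camera, and calibration are not given here.

```python
import cv2
import numpy as np

# Hypothetical calibration: map 8-bit thermal pixel intensity to degrees Celsius.
# Real coefficients depend on the thermal camera and must be measured.
PIXEL_TO_CELSIUS_SLOPE = 0.1
PIXEL_TO_CELSIUS_OFFSET = 20.0

def estimate_face_temperatures(thermal_gray: np.ndarray) -> list[float]:
    """Detect faces in a grayscale thermal frame and estimate a temperature per face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(thermal_gray, scaleFactor=1.1, minNeighbors=5)
    temperatures = []
    for (x, y, w, h) in faces:
        roi = thermal_gray[y:y + h, x:x + w]
        # Use the hottest pixels in the face region as the reading, which is less
        # sensitive to hair or mask coverage than the mean intensity.
        hottest = np.percentile(roi, 99)
        temperatures.append(PIXEL_TO_CELSIUS_SLOPE * hottest + PIXEL_TO_CELSIUS_OFFSET)
    return temperatures

if __name__ == "__main__":
    frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    if frame is not None:
        print(estimate_face_temperatures(frame))
```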
Comparison of the Packet Wavelet Transform Method for Medical Image Compression
Atmaja, I Made Ari Dwi Suta; Triadi, Wilfridus Bambang; Astawa, I Nyoman Gede Arya; Radhitya, Made Leo
JOIV : International Journal on Informatics Visualization Vol 7, No 4 (2023)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.7.4.1732

Abstract

Medical images are often used for educational, analytical, and medical diagnostic purposes, and medical image data requires large amounts of storage on computers. Three wavelet families, namely Haar, Daubechies, and Biorthogonal, were used in this study, which aims to find the best of the three tested wavelet methods. This study uses medical images representing USG and CT-scan images as testing data. The first test compares compression ratios at three threshold values: 30, 40, and 50. The second test examines PSNR values at the different thresholds. The third test compares the rate (image size) with the PSNR value. The final test measures each medical image's compression and decompression times. The first compression ratio test on both medical images showed that CT-scan images with the Haar and Biorthogonal wavelets were the best, with an average compression ratio of 40.76% and a PSNR of 33.77. The PSNR obtained also increases for larger image sizes. The average compression time is 0.52 seconds, and the decompression time is 2.27 seconds. Based on the test results, this study recommends the Daubechies wavelet for compression (an average of 0.51 seconds) and the Biorthogonal wavelet for medical image decompression (1.69 seconds).
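A minimal sketch of the kind of comparison described above, using PyWavelets to hard-threshold 2-D wavelet coefficients with the Haar, Daubechies, and biorthogonal families at the study's thresholds (30, 40, 50) and reporting PSNR. The decomposition level, the specific wavelet orders (db4, bior3.5), and the synthetic test image are assumptions; the study's packet decomposition and entropy coding are not reproduced, so the numbers will differ.

```python
import numpy as np
import pywt

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Peak signal-to-noise ratio for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def threshold_compress(image: np.ndarray, wavelet: str, thr: float, level: int = 3):
    """Decompose, hard-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    thresholded = [approx] + [
        tuple(pywt.threshold(d, thr, mode="hard") for d in band) for band in details
    ]
    reconstructed = pywt.waverec2(thresholded, wavelet)
    # waverec2 may pad by a pixel for some wavelets; crop back to the original shape.
    return reconstructed[: image.shape[0], : image.shape[1]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)  # stand-in for a CT/USG slice
    for wavelet in ("haar", "db4", "bior3.5"):   # Haar, Daubechies, biorthogonal examples
        for thr in (30, 40, 50):                 # thresholds used in the study
            rec = threshold_compress(image, wavelet, thr)
            print(f"{wavelet:8s} thr={thr}: PSNR = {psnr(image, rec):.2f} dB")
```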
Ontology Modeling for Subak Knowledge Management System
Hariyanti, Ni Kadek Dessy; Linawati, Linawati; Oka Widyantara, I Made; Sukadarmika, Gede; Arya Astawa, I Nyoman Gede; Kamarudin, Nur Diyana
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.3386

Abstract

Subak, a Balinese traditional agricultural organization, holds cultural-heritage knowledge that includes both explicit and tacit elements. This research aimed to develop an ontology-based knowledge model for the digital preservation of Subak culture in the form of a Knowledge Management System (KMS). The development of the model consisted of three main stages: requirement analysis, ontology development, and ontology assessment. Requirement analysis included data collection through field observations, in-depth interviews, and document analysis, while ontology development defined hierarchical classes, object and data properties, and individual entities. Ontology assessment was the stage of evaluating and testing the resulting ontology. Protégé software was used to implement the ontology model, generating OntoGraf visualizations and producing Web Ontology Language (OWL) output. Validation was carried out using both Ontology Quality Analysis (OntoQA) and expert comments. The evaluation results showed a Relationship Richness (RR) value of 0.8, an Inheritance Richness (IR) value of 0.78, and an Attribute Richness (AR) value of 3.89, showing that the ontology captured a comprehensive and representative body of knowledge. Experts stated that the ontology model created was suitable for representing Subak knowledge as a form of cultural preservation. The developed Subak ontology could serve as a foundational knowledge base for further research in related fields such as agricultural management, social organization, and cultural preservation.
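For reference, the OntoQA metrics cited above are simple ratios over the ontology schema. The sketch below computes them from element counts following the usual OntoQA definitions; the counts are placeholders chosen only so the ratios reproduce the reported RR = 0.8, IR = 0.78, and AR = 3.89, since the actual class and property counts are not given in the abstract.

```python
def relationship_richness(num_object_properties: int, num_subclass_links: int) -> float:
    """RR = non-inheritance relationships / (inheritance + non-inheritance relationships)."""
    return num_object_properties / (num_subclass_links + num_object_properties)

def inheritance_richness(num_subclass_links: int, num_classes: int) -> float:
    """IR = average number of subclass links per class."""
    return num_subclass_links / num_classes

def attribute_richness(num_attributes: int, num_classes: int) -> float:
    """AR = average number of attributes (data properties) per class."""
    return num_attributes / num_classes

if __name__ == "__main__":
    # Placeholder counts, not the real Subak ontology figures.
    classes, subclass_links, object_props, data_props = 18, 14, 56, 70
    print("RR =", round(relationship_richness(object_props, subclass_links), 2))  # 0.8
    print("IR =", round(inheritance_richness(subclass_links, classes), 2))        # 0.78
    print("AR =", round(attribute_richness(data_props, classes), 2))              # 3.89
```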
Multilingual Parallel Corpus for Indonesian Low-Resource Languages
Sulistyo, Danang Arbian; Wibawa, Aji Prasetya; Prasetya, Didik Dwi; Ahda, Fadhli Almu’iini; Arya Astawa, I Nyoman Gede; Andika Dwiyanto, Felix
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.5.3412

Abstract

Indonesia has an extraordinary number of languages, with more than 700 regional languages such as Javanese, Madurese, Balinese, Sundanese, and Bugis. Despite this wealth of languages, digital resources for them remain scarce, making digital preservation and accessibility a significant challenge. This research addresses the gap by building a multilingual parallel corpus consisting of more than 150,000 phrase pairs extracted from Bible translations in five regional languages of Indonesia. Rigorous preprocessing, normalization, and Unicode tokenization were performed to improve data quality and consistency. The encoder-decoder architecture was a key focus in the development of the NMT model. Evaluation covered both forward and backward translation directions, measured using BLEU scores. The results show that forward translation consistently outperforms backward translation. The Indonesian-Javanese model produced a score of 0.9939 for BLEU-1 and 0.9844 for BLEU-4, indicating a high level of translation quality. In contrast, reverse translation tasks, such as translating from Sundanese to Indonesian, presented significant challenges, with BLEU-4 scores as low as 0.3173, illustrating the complexity of translation between Indonesian and the regional languages. This work provides a dataset that future research can use, for example to train transformer-based models or to incorporate additional linguistic features, to improve the accuracy of natural language processing (NLP) models for Indonesia's underrepresented regional languages.
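A minimal sketch of the BLEU-1/BLEU-4 evaluation described above, using NLTK's corpus_bleu. The reference and hypothesis token lists are placeholders standing in for Indonesian-Javanese test pairs, not the study's data or model outputs, and the smoothing choice is an assumption.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Placeholder reference/hypothesis pairs standing in for Indonesian-Javanese test sentences.
references = [
    [["kula", "badhe", "tindak", "dhateng", "peken"]],  # one (or more) references per sentence
    [["piyambakipun", "maos", "buku"]],
]
hypotheses = [
    ["kula", "badhe", "tindak", "dhateng", "peken"],     # model output tokens
    ["piyambakipun", "maos", "buku"],
]

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
bleu1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0),
                    smoothing_function=smooth)
bleu4 = corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25),
                    smoothing_function=smooth)
print(f"BLEU-1 = {bleu1:.4f}, BLEU-4 = {bleu4:.4f}")
```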
Combination of Feature Extractions for Classification of Coral Reef Fish Types Using Backpropagation Neural Network
Latumakulita, Luther Alexander; Arya Astawa, I Nyoman Gede; Mairi, Vitrail Gloria; Purnama, Fajar; Wibawa, Aji Prasetya; Jabari, Nida; Islam, Noorul
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.1082

Abstract

Feature extraction is important for obtaining information from digital images, and its results are used in the classification process. The success of digital image classification depends heavily on the chosen feature extraction method; several studies combine multiple feature extraction methods to produce a more accurate classification. Marine fish species are classified by identifying distinguishing characteristics such as shape, body pattern, color, or other traits. This study aimed to classify coral reef fish species based on the characteristics contained in fish images using the Backpropagation Neural Network (BPNN) method. The data used in this research was collected directly from Bunaken National Marine Park (BNMP) in Indonesia. The first stage extracted shape features using the Geometric Invariant Moment (GIM) method, texture features using the Gray Level Co-occurrence Matrix (GLCM) method, and color features using the Hue Saturation Value (HSV) method. The three sets of extracted features were used as input for the next stage, the classification process using the BPNN method. The test results using 5-fold cross-validation found that the lowest test accuracy was 85%, the highest was 100%, and the average was 96%. This shows that the model derived from combining the three feature extraction methods with the BPNN training algorithm performs very well in classifying coral reef fish.
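The combination described above can be sketched as follows, assuming OpenCV's Hu moments as a stand-in for the geometric invariant moments, scikit-image for GLCM texture properties, an HSV color histogram, and scikit-learn's MLPClassifier as the backpropagation network. The synthetic images, labels, and hyperparameters are placeholders, not the study's BNMP dataset or configuration.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def extract_features(bgr: np.ndarray) -> np.ndarray:
    """Concatenate shape (Hu moments), texture (GLCM), and color (HSV histogram) features."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Shape: 7 Hu invariant moments, log-scaled for numerical stability.
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    # Texture: GLCM contrast, correlation, energy, and homogeneity.
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "correlation", "energy", "homogeneity")]

    # Color: coarse HSV histogram (8 bins per channel), normalized.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                        [0, 180, 0, 256, 0, 256]).flatten()
    hist = hist / (hist.sum() + 1e-12)

    return np.concatenate([hu, texture, hist])

if __name__ == "__main__":
    # Synthetic stand-in data: random images for two classes (replace with the fish dataset).
    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(20)]
    labels = np.array([0, 1] * 10)
    X = np.array([extract_features(img) for img in images])
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    print(cross_val_score(model, X, labels, cv=5))  # 5-fold cross-validation, as in the study
```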