Articles

Found 7 Documents
Journal : JOIV : International Journal on Informatics Visualization

Deep Learning Models for Dental Conditions Classification Using Intraoral Images
Makarim, Ahmad Fauzi; Karlita, Tita; Sigit, Riyanto; Bayu Dewantara, Bima Sena; Brahmanta, Arya
JOIV : International Journal on Informatics Visualization Vol 8, No 3 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3.1914

Abstract

This paper presents the digitalization of dental medical records to support dentists in the patient examination process. Currently, a dentist fills out the evaluation form manually, drawing and labeling each patient's tooth condition based on their observations; consequently, a single examination takes too long to finish. AI-based digitalization technology is a promising solution for improving time efficiency. To address this problem, we built and compared several classification models that recognize human dental conditions to help doctors analyze patients' teeth. We apply YOLOv5, MobileNet V2, and IONet (our proposed CNN model) as deep learning models to recognize five common dental conditions: normal, filling, caries, gangrene radix, and impaction. We tested the classification ability of YOLO as an object detection model and compared it with the classification models. We used a dataset of 3,708 intraoral dental images generated by various augmentation methods from 1,767 original images, collected and annotated with the help of dentists. The dataset is divided into three parts: 90% of the total is used as training and validation data, which is split again into 80% training data and 20% validation data, and the remaining 10% is used as testing data to compare classification performance. Based on our experiments, YOLOv5, as an object detection model, classifies human dental conditions better than the classification models: it achieves 82% testing accuracy, while MobileNet V2 and IONet reach only 80% and 70%, respectively. Although the test accuracies of YOLOv5 and MobileNet V2 differ only slightly, YOLOv5 classifies dental objects more efficiently, which is notable given that it is an object detection model. Challenges remain with the deep learning techniques used in this research, but they can be addressed in further development; a more complex model and a larger, more varied, and balanced dataset can address these limitations.
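A minimal sketch of the three-way dataset split described in this abstract, assuming a class-per-folder image layout; the directory name, file pattern, and random seed are illustrative, not the paper's actual setup.

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

# Hypothetical class-per-folder layout: intraoral_dataset/<condition>/<image>.jpg
images = sorted(Path("intraoral_dataset").glob("*/*.jpg"))
labels = [p.parent.name for p in images]   # normal, filling, caries, gangrene radix, impaction

# 10% of the total dataset is held out as the test set used to compare the models.
trainval_x, test_x, trainval_y, test_y = train_test_split(
    images, labels, test_size=0.10, stratify=labels, random_state=42)

# The remaining 90% is split again into 80% training / 20% validation data.
train_x, val_x, train_y, val_y = train_test_split(
    trainval_x, trainval_y, test_size=0.20, stratify=trainval_y, random_state=42)

print(len(train_x), len(val_x), len(test_x))
```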
Environmental Monitoring System using Wireless Multi-Node Sensors based Communication System on Volcano Observations Drones
Huda, Achmad Torikul; Setiawardhana, Setiawardhana; Dewantara, Bima Sena Bayu; Sigit, Riyanto
JOIV : International Journal on Informatics Visualization Vol 8, No 2 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.2.1961

Abstract

Indonesia lies on the Ring of Fire and is home to some of the world's most active volcanoes. Volcanic activity has a significant effect on the landscape and on the people who live there. Evacuating and assisting victims is difficult, requiring hard work and sometimes endangering the safety of the rescue team itself. For this reason, high-tech tools are needed. Unmanned aerial vehicles (UAVs), also called drones, have become a promising tool for remote environmental monitoring in recent years. The system design consists of a monitoring platform, a gateway, and sensor nodes attached to the UAV that monitor toxic gas contamination in the air. Using IoT technology, sensor data is sent wirelessly to a central monitoring station for a thorough and accurate study of volcanic activity. This system offers a flexible and complete way to monitor volcanic activity, learn more about it, and respond to disasters more easily. Tests were also conducted to measure system speed, including latency, and to determine network quality of service. The results show that data is successfully sent in real time from the sensor nodes to the monitoring system. The average round-trip time for payload transmission is 446.046226 ms, which shows how well the system sends data from the sensors attached to the UAV to the monitoring station. Together, the UAV-mounted sensor nodes and the monitoring platform can be used to build and optimize disaster mitigation systems.
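A hedged sketch of the round-trip-time measurement reported above; the abstract does not specify the transport protocol, so this assumes a simple UDP echo service on the monitoring station, and the station address, payload fields, and sample count are placeholders.

```python
import json
import socket
import statistics
import time

STATION = ("192.168.1.10", 5005)   # hypothetical monitoring-station address
payload = json.dumps({"node": 1, "co2_ppm": 412.0, "so2_ppm": 0.03}).encode()

rtts = []
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    for _ in range(100):
        t0 = time.perf_counter()
        sock.sendto(payload, STATION)
        try:
            sock.recvfrom(1024)                                 # wait for the echoed payload
            rtts.append((time.perf_counter() - t0) * 1000.0)    # round-trip time in ms
        except socket.timeout:
            pass                                                # lost packet: skip this sample

if rtts:
    print(f"average RTT: {statistics.mean(rtts):.3f} ms over {len(rtts)} samples")
```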
Face Recognition for Logging in Using Deep Learning for Liveness Detection on Healthcare Kiosks
Ryando, Catoer; Sigit, Riyanto; Setiawardhana, Setiawardhana; Sena Bayu Dewantara, Bima
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2759

Abstract

This study explores the enhancement of healthcare kiosks by integrating facial recognition and liveness detection technologies to address the limitations of healthcare service accessibility for a growing population. Healthcare kiosks increase efficiency, lessen the strain on conventional institutions, and promote accessibility. However, conventional authentication methods such as passwords and RFID can be lost, stolen, or hacked, which raises privacy and data security problems. Face recognition is more secure but is susceptible to spoofing attacks. To improve security, this study integrates liveness detection with face recognition. Data preparation is performed using deep learning algorithms, namely FaceNet and Multi-task Cascaded Convolutional Neural Networks (MTCNN). The system verifies individuals through real-time authentication, providing correct identification. Data augmentation techniques help make the model more accurate and robust. The experimental results demonstrate the system's usefulness: the VGG16 model outperforms alternative architectures such as MobileNet V2, ResNet-50, and DenseNet-121, achieving 100% accuracy in liveness detection. Combining face recognition with liveness detection greatly improves security, making it a dependable option for real-world healthcare applications. By differentiating between genuine and fake faces and foiling spoofing attempts, facial liveness detection boosts security. This study offers insights into building biometric systems for safe and effective identity verification in the healthcare industry.
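A minimal sketch of the face-verification step described above, using the facenet-pytorch implementations of MTCNN (detection and alignment) and FaceNet embeddings; the image paths and similarity threshold are hypothetical, and the VGG16 liveness classifier from the paper is assumed to be a separately trained model that is not shown here.

```python
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                                   # face detection + alignment
embedder = InceptionResnetV1(pretrained="vggface2").eval()      # FaceNet-style embeddings

def embed(path):
    face = mtcnn(Image.open(path).convert("RGB"))               # aligned 160x160 face tensor
    if face is None:
        raise ValueError(f"no face found in {path}")
    with torch.no_grad():
        return embedder(face.unsqueeze(0))

enrolled = embed("enrolled_user.jpg")                           # hypothetical enrolled image
probe = embed("kiosk_capture.jpg")                              # hypothetical kiosk capture

# Cosine similarity against the enrolled template; the 0.7 threshold is illustrative.
similarity = torch.nn.functional.cosine_similarity(enrolled, probe).item()
print("match" if similarity > 0.7 else "no match", f"(similarity={similarity:.2f})")
```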
Classification of Intraoral Images in Dental Diagnosis Based on GLCM Feature Extraction Using Support Vector Machine
Romadhon, Nur Rizky; Sigit, Riyanto; Dewantara, Bima Sena Bayu
JOIV : International Journal on Informatics Visualization Vol 9, No 4 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.4.3051

Abstract

This study aims to develop an AI-based diagnostic tool for classifying dental conditions and tooth types to enhance the accuracy and efficiency of dental diagnostics. Manual documentation and diagnosis in dentistry are often prone to errors, inefficiencies, and delays, leading to adverse patient outcomes. Leveraging digital image processing and machine learning, this research addresses these challenges by automating the classification process. Dental imaging data were collected from the Dental and Mouth Hospital (RSGM) of Nala Husada Surabaya, Indonesia, comprising 3,910 images categorized into dental conditions (1,767 images) and tooth types (2,143 images). The dataset was preprocessed through resizing, grayscale conversion, histogram equalization, and median filtering. Texture features were extracted using the Gray Level Co-occurrence Matrix (GLCM), and classification was performed using Support Vector Machine (SVM), K-Nearest Neighbor, Naïve Bayes, Decision Tree, and Random Forest algorithms. The SVM algorithm achieved the highest accuracy of 54.24% for dental conditions and 41.49% for tooth types, outperforming other methods. However, the overall performance was suboptimal, primarily due to dataset limitations, reliance on GLCM for feature extraction, and insufficient preprocessing. The results highlight the potential of AI-based tools in dentistry but also underscore the need for improvements in dataset diversity, advanced feature extraction methods, and hyperparameter optimization. Future research should focus on expanding the dataset, exploring deep learning-based feature extraction, and employing robust evaluation strategies to enhance model performance. This study lays the groundwork for developing a more reliable and efficient AI-based diagnostic tool, ultimately improving patient outcomes and streamlining clinical workflows in dentistry.
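A sketch of the GLCM feature extraction and SVM classification pipeline described above; the preprocessing steps follow the abstract (resize, grayscale, histogram equalization, median filter), while the GLCM distances and angles, the image size, and the SVM settings are assumptions rather than the paper's exact values.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(path):
    """Preprocess one intraoral image and extract GLCM texture features."""
    img = cv2.resize(cv2.imread(path), (256, 256))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                   # histogram equalization
    gray = cv2.medianBlur(gray, 3)                  # median filtering
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Usage sketch: X is the stacked feature matrix, y the condition/tooth-type labels.
# clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```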
Real-Time Tuberculosis Bacteria Detection Using YOLOv8
Sigit, Riyanto; Yuniarti, Heny; Karlita, Tita; Kusumawati, Ratna; Maulana, Firja Hanif
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.5.3147

Abstract

Tuberculosis (TB) is a contagious disease caused by the bacterium Mycobacterium tuberculosis. If not adequately managed, TB can become a life-threatening condition. In Indonesia, TB remains a critical public health issue, with millions affected and the country ranking third globally in TB cases, following India and China. Symptoms of TB include a persistent cough lasting more than three weeks, hemoptysis (bloody sputum), fever, chest pain, and night sweats. The widely used diagnostic method in Indonesia is the Ziehl-Neelsen stained sputum smear technique, which processes sputum samples with specific reagents so that acid-fast bacilli can be visualized through microscopic examination. However, this process is labor-intensive and time-consuming, often requiring between half an hour and several hours for an accurate diagnosis. To address these challenges, there is a crucial need for technology that accelerates the TB diagnosis process and eases the workload of healthcare workers. This study employs YOLOv8 to automate the detection of acid-fast bacilli. The system acquires sputum sample images from a microscope, and the acquired data is then used to train the model to detect tuberculosis bacteria. The proposed real-time approach, employing the YOLOv8 algorithm, demonstrated adequate performance for one of our specialized models, achieving a precision of 0.88, a recall of 0.77, and an F1 score of 0.82. This research aims to enhance TB case detection and increase treatment coverage, thereby improving overall public health outcomes in Indonesia.
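A hedged sketch of training and running a YOLOv8 detector with the Ultralytics API named in the abstract; the dataset YAML, checkpoint, epochs, image size, and confidence threshold are placeholders, not the study's configuration.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                     # pretrained checkpoint as a starting point
model.train(data="tb_bacilli.yaml", epochs=100, imgsz=640)     # hypothetical dataset config

# Real-time inference on frames streamed from the microscope camera (device 0).
for result in model.predict(source=0, stream=True, conf=0.25):
    print(f"{len(result.boxes)} acid-fast bacilli detected in this frame")
```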
Human Bone Age Estimation of Carpal Bone X-Ray Using Residual Network with Batch Normalization Classification
Nabilah, Anisah; Sigit, Riyanto; Fariza, Arna; Madyono, Madyono
JOIV : International Journal on Informatics Visualization Vol 7, No 1 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.1.1024

Abstract

Bone age is an index used by pediatric radiology and endocrinology departments worldwide to define skeletal maturity for medical and non-medical purposes. In general, the clinical method for bone age assessment (BAA) is based on examining the visual ossification of individual bones in the left hand and comparing it with a standard radiographic atlas of the hand. However, this method is highly dependent on the experience and condition of the forensic expert. This paper proposes a new approach to human bone age estimation based on the carpal bones of the hand, using a residual network architecture. The classification layer was modified with batch normalization to optimize the training process. Before training, we applied image augmentation techniques to make the dataset more varied: resizing; random affine transformation; horizontal flipping; adjusting brightness, contrast, saturation, and hue; and image inversion. The output is the classification of bone age in the range of 1 to 19 years. The VGG16 model yielded an MAE of 5.19 and an R2 value of 0.56, while the newly developed ResNeXt50 (32x4d) model produced an MAE of 4.75 and an R2 value of 0.63. These results indicate that the proposed modification of the residual network improves classification compared with the VGG16 model.
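A sketch of the modified residual network and augmentations described above, using torchvision; the exact position of the added batch normalization layer and the augmentation parameters are assumptions, and the 19 outputs correspond to the 1 to 19 year age classes.

```python
import torch.nn as nn
from torchvision import models, transforms

# ResNeXt50 (32x4d) backbone with batch normalization added to the classification head.
model = models.resnext50_32x4d(weights="IMAGENET1K_V1")
in_features = model.fc.in_features
model.fc = nn.Sequential(
    nn.BatchNorm1d(in_features),      # batch normalization to stabilize training
    nn.Linear(in_features, 19),       # one class per year of bone age (1-19)
)

# Augmentations mirroring those listed in the abstract (parameter values are illustrative).
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    transforms.RandomInvert(p=0.2),
    transforms.ToTensor(),
])
```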
Face Recognition Using Convolution Neural Network Method with Discrete Cosine Transform Image for Login System
Setiawan, Ari; Sigit, Riyanto; Rokhana, Rika
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1546

Abstract

These days, the application of image processing in computer vision is becoming more crucial, and some situations require a solution based on computer vision and the rapidly developing field of deep learning. One method continuously developed in deep learning is the Convolutional Neural Network, with MobileNet, EfficientNet, VGG16, and others being widely used architectures. CNN architectures are trained primarily on image datasets; the larger the dataset, the more image storage space is required. Compression via the discrete cosine transform (DCT) is one way to address this issue. In the present research, we implement DCT compression to work around the system's limited storage space, and we also compare compressed and uncompressed images. Each enrolled user was tested five times, for a total of 150 tests. Based on the test results, compression reduces image size by 25%. The case study presented is face recognition, and the training results indicate that the accuracy on DCT-compressed images ranges from 91.33% to 100%, while the accuracy on uncompressed facial images ranges from 98.17% to 100%. In addition, the accuracy of the proposed CNN architecture has increased to 87.43%, while the accuracy of MobileNet has increased by 16.75%. Facial biometric authentication using a deep learning algorithm and DCT-compressed images was successfully accomplished with an accuracy of 95.33% and an error rate of 4.67%.
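A minimal sketch of DCT-based image compression as described above: the low-frequency coefficients are kept and the image is reconstructed before being fed to the CNN. The number of retained coefficients, the image size, and the file names are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def dct_compress(gray, keep=64):
    """Keep only the top-left keep x keep DCT coefficients of a grayscale image."""
    coeffs = cv2.dct(np.float32(gray) / 255.0)     # 2-D discrete cosine transform
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0                       # retain only low-frequency terms
    restored = cv2.idct(coeffs * mask)             # inverse DCT reconstruction
    return np.uint8(np.clip(restored * 255.0, 0, 255))

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
img = cv2.resize(img, (160, 160))                    # even dimensions, as required by cv2.dct
cv2.imwrite("face_dct.jpg", dct_compress(img, keep=64))
```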