Articles

Found 5 Documents
Journal: Communications in Science and Technology

Segmentation of retinal blood vessels for detection of diabetic retinopathy: A review
Aras, Rezty Amalia; Lestari, Tri; Nugroho, Hanung Adi; Ardiyanto, Igi
Communications in Science and Technology Vol 1 No 1 (2016)
Publisher : Komunitas Ilmuwan dan Profesional Muslim Indonesia

DOI: 10.21924/cst.1.1.2016.13

Abstract

Diabetic retinopathy (DR) is an effect of diabetes mellitus on human vision and a major cause of blindness. Early diagnosis of DR is an important requirement in diabetes treatment. Retinal fundus images are commonly used to observe the symptoms of diabetic retinopathy: they present retinal features such as blood vessels and also capture the pathologies that may lead to DR. Blood vessels are one of the retinal features that can reveal retinal pathologies. They can be extracted from a retinal image by image processing in the following stages: pre-processing, segmentation, and post-processing. This paper reviews public retinal image datasets and several segmentation methods from the literature. Each discussed method is applicable to its respective research case; no further analysis is given to determine the best method for general cases. However, we suggest morphological and multiscale methods, which give the best segmentation accuracy.
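To make the suggested direction concrete, the following is a minimal sketch of a morphological vessel-enhancement pipeline of the kind the review favours, written with OpenCV. The file name, CLAHE settings, and structuring-element size are illustrative assumptions, not values taken from the reviewed papers.

```python
import cv2

# Minimal sketch: morphological vessel enhancement on a fundus image.
# "fundus.png" and all parameter values are illustrative assumptions.
img = cv2.imread("fundus.png")
green = img[:, :, 1]  # vessels usually contrast best in the green channel

# Pre-processing: contrast-limited adaptive histogram equalisation
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# Bottom-hat (black-hat) transform highlights dark, thin structures such as vessels
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
vessels = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)

# Simple Otsu thresholding as a stand-in for the segmentation stage
_, mask = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```

A multiscale variant would repeat the enhancement with structuring elements of several sizes and combine the responses before thresholding.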
Dark lesion elimination based on area, eccentricity and extent features for supporting haemorrhages detection
Yulyanti, Vesi; Adi Nugroho, Hanung; Ardiyanto, Igi; Oktoeberza, Widhia KZ
Communications in Science and Technology Vol 4 No 1 (2019)
Publisher : Komunitas Ilmuwan dan Profesional Muslim Indonesia

DOI: 10.21924/cst.4.1.2019.110

Abstract

One of the complications of long-term diabetes is damage to the retinal vessels, called diabetic retinopathy. It is characterised by the appearance of large bleeding spots (haemorrhages) on the surface of the retina. Early detection of haemorrhages is needed to prevent the worst outcome, which is vision loss. This study aims to detect haemorrhages by eliminating other dark-lesion objects with characteristics similar to haemorrhages, based on three features: area, eccentricity and extent. The study uses 43 retinal fundus images taken from the DIARETDB1 database. Based on the validation process, the average sensitivity obtained is 80.5%. These results indicate that the proposed method is quite capable of detecting haemorrhages appearing on the retinal surface.
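As an illustration of the feature-based elimination step, the sketch below filters labelled dark-lesion candidates by area, eccentricity and extent using scikit-image region properties. The threshold values are assumptions for illustration only, not the values reported in the paper.

```python
import numpy as np
from skimage.measure import label, regionprops

def filter_dark_lesions(binary_mask,
                        min_area=30, max_area=3000,
                        max_eccentricity=0.95, min_extent=0.3):
    """Keep candidate regions whose shape resembles a haemorrhage.

    The thresholds are illustrative assumptions, not the paper's values.
    """
    labelled = label(binary_mask)
    kept = np.zeros_like(binary_mask, dtype=bool)
    for region in regionprops(labelled):
        if (min_area <= region.area <= max_area
                and region.eccentricity <= max_eccentricity
                and region.extent >= min_extent):
            kept[labelled == region.label] = True  # retain this candidate
    return kept
```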
Comparison of text-image fusion models for high school diploma certificate classification
Atmaja Perdana, Chandra Ramadhan; Adi Nugroho, Hanung; Ardiyanto, Igi
Communications in Science and Technology Vol 5 No 1 (2020)
Publisher : Komunitas Ilmuwan dan Profesional Muslim Indonesia

DOI: 10.21924/cst.5.1.2020.172

Abstract

Scanned documents are commonly used in this digital era, and extracting text and images from them plays an important role in acquiring information. A document may contain both text and images. Combined text-image classification has been investigated previously, but in those works the text in the datasets was provided digitally. In this research, we used a dataset of high school diploma certificates, from which the text had to be acquired using optical character recognition (OCR). There were two categories of high school diploma certificate, each with three classes. We used convolutional neural networks for both text and image classification and then combined the two models using an adaptive fusion model and a weight fusion model to find the better fusion model. We conclude that the weight fusion model, with a performance of 0.927, outperforms the adaptive fusion model with 0.892.
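For intuition, a weight fusion model can be sketched as a weighted combination of the class probabilities produced by the text CNN and the image CNN. The snippet below is a minimal sketch under that assumption; the weight value and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def weight_fusion(p_text, p_image, w_text=0.5):
    """Fuse class probabilities from a text model and an image model.

    p_text, p_image: arrays of shape (n_samples, n_classes) with softmax outputs.
    w_text is an illustrative weight; in practice it would be tuned on validation data.
    """
    w_image = 1.0 - w_text
    fused = w_text * np.asarray(p_text) + w_image * np.asarray(p_image)
    return fused.argmax(axis=1)  # predicted class index per sample
```

An adaptive fusion model would instead learn the weighting (for example per sample) rather than fixing it in advance.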
Decoding brain tumor insights: Evaluating CAM variants with 3D U-Net for segmentation
Hardani, Dian Nova Kusuma; Ardiyanto, Igi; Adi Nugroho, Hanung
Communications in Science and Technology Vol 9 No 2 (2024)
Publisher : Komunitas Ilmuwan dan Profesional Muslim Indonesia

DOI: 10.21924/cst.9.2.2024.1477

Abstract

Brain tumor segmentation is critical for effective diagnosis and treatment planning, yet conventional manual segmentation techniques are inefficient and variable, highlighting the need for automated methods. This study enhances medical image analysis, particularly brain tumor segmentation, by improving the explainability and accuracy of deep learning models, which are essential for clinical trust. Using the 3D U-Net architecture with the BraTS 2020 dataset, the study achieved precise localization and detailed segmentation with mean recall values of 0.8939 for Whole Tumor (WT), 0.7941 for Enhancing Tumor (ET), and 0.7846 for Tumor Core (TC). The Dice coefficients were 0.9065 for WT, 0.8180 for TC, and 0.7715 for ET. By integrating explainable AI techniques, namely Class Activation Mapping (CAM) and its variants (Grad-CAM, Grad-CAM++, and Score-CAM), the study ensures both high segmentation accuracy and transparency. Grad-CAM provided the most reliable and detailed visual explanations, significantly enhancing model interpretability for clinical applications. This approach not only improves the accuracy of brain tumor segmentation but also builds clinical trust by making model decisions more transparent and understandable. Finally, the combination of 3D U-Net and XAI techniques supports more effective diagnosis, treatment planning, and patient care in brain tumor management.
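To show how Grad-CAM extends to a volumetric network, here is a minimal PyTorch sketch that pools gradients over the three spatial dimensions to weight the activations of one convolutional layer. The choice of layer, the hook mechanics, and the normalisation are illustrative assumptions, not the paper's implementation.

```python
import torch

def grad_cam_3d(model, volume, target_layer, class_idx):
    """Illustrative Grad-CAM for a 3D segmentation network (e.g. a 3D U-Net).

    volume: input tensor of shape (1, C, D, H, W); target_layer: conv layer to explain.
    """
    activations, gradients = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: activations.update(feat=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.update(grad=go[0]))

    output = model(volume)                      # (1, n_classes, D, H, W)
    score = output[:, class_idx].sum()          # scalar score for the class of interest
    model.zero_grad()
    score.backward()

    # Global-average-pool the gradients over the spatial axes to get channel weights
    weights = gradients["grad"].mean(dim=(2, 3, 4), keepdim=True)
    cam = torch.relu((weights * activations["feat"]).sum(dim=1))  # (1, D, H, W)
    cam = cam / (cam.max() + 1e-8)              # normalise to [0, 1]

    h1.remove(); h2.remove()
    return cam
```

Grad-CAM++ and Score-CAM differ mainly in how these channel weights are computed (higher-order gradient terms and forward-pass perturbation scores, respectively).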
Evaluating the effectiveness of facial actions features for the early detection of driver drowsiness in driving safety monitoring system
Rahmawati, Yenny; Woraratpanya, Kuntpong; Ardiyanto, Igi; Adi Nugroho, Hanung
Communications in Science and Technology Vol 10 No 1 (2025)
Publisher : Komunitas Ilmuwan dan Profesional Muslim Indonesia

DOI: 10.21924/cst.10.1.2025.1594

Abstract

Traffic accidents caused by drowsiness continue to pose a serious threat to road safety. Many of these accidents can be prevented by alerting drivers when they begin to feel sleepy. This research introduces a non-invasive system for detecting driver drowsiness based on visual features extracted from videos captured by a dashboard-mounted camera. The proposed system utilizes facial landmark points and a facial mesh detector to identify key areas where the mouth aspect ratio, eye aspect ratio, and head pose are analyzed. These features are then fed into three different classification models: 1D-CNN, LSTM, and BiLSTM. The system’s performance was evaluated by comparing the use of these features as indicators of driver drowsiness. The results show that combining all three facial features is more effective in detecting drowsiness than using one or two features alone. The detection accuracy reached 0.99 across all tested models.
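The two ratio features can be computed directly from landmark coordinates. The sketch below assumes the common six-point eye layout and a four-point mouth layout; the paper uses a facial-mesh detector, so the exact indexing here is an illustrative assumption.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six landmarks (assumed 68-point-style ordering):
    two vertical distances divided by twice the horizontal distance."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def mouth_aspect_ratio(mouth):
    """Mouth aspect ratio: vertical opening over horizontal width,
    from four assumed landmarks (top, bottom, left, right corners)."""
    top, bottom, left, right = (np.asarray(p, dtype=float) for p in mouth)
    return np.linalg.norm(top - bottom) / np.linalg.norm(left - right)
```

Sequences of these ratios, together with head-pose angles, would then form the per-frame feature vectors fed to the 1D-CNN, LSTM, and BiLSTM classifiers.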