Articles

Found 2 Documents
Journal : Jurnal Mandiri IT

Fetal heart chamber segmentation on fetal echocardiography image using deep learning Sutarno, Sutarno; Rachmatullah, Muhammad Naufal; Abdurahman; Isnanto, Rahmat Fadli
Jurnal Mandiri IT Vol. 14 No. 1 (2025): July: Computer Science and Field.
Publisher : Institute of Computer Science (IOCS)

DOI: 10.35335/mandiri.v14i1.416

Abstract

Advances in medical imaging and its utilization have encouraged the development of more sophisticated image analysis technologies. In this context, image segmentation acts as a fundamental preprocessing step, but fetal echocardiography (FE) image segmentation still faces challenges in terms of accuracy and efficiency. The dataset for developing the FE image segmentation model was obtained from examinations of patients at Muhammad Husein Hospital (RSMH) in Palembang with normal hearts, atrial septal defect (ASD), ventricular septal defect (VSD), and atrioventricular septal defect (AVSD), totaling 650 FE images, all verified by experts. Compared to previous studies, this study focuses on creating a deep learning (DL)-based segmentation model for FE images using an open-source framework, the Python MIScnn library, which is specifically designed for medical imaging. This differs from more general DL frameworks such as TensorFlow or PyTorch, which do not specialize in medical imaging. Furthermore, to improve model accuracy and efficiency, various configurations were tested, including variations in batch size and loss function. Model performance was evaluated comprehensively using several important metrics beyond pixel accuracy and IoU: F1 score, mean accuracy, precision, recall, and false positive rate (FPR). This approach is expected to provide a more in-depth picture of model performance than previous studies that may have considered only a few metrics. The best results were achieved using the U-Net architecture with a batch size of 32 and the binary cross-entropy loss function. This U-Net model demonstrated excellent overall performance, achieving a pixel accuracy of 0.996, an IoU of 0.995, a mean accuracy of 0.965, an FPR of 0.004, a precision of 0.929, a recall of 0.933, and an F1-score of 0.941. These findings highlight the significant potential of deep learning methods in improving the accuracy and efficiency of fetal echocardiography image analysis.
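The metrics reported above (IoU, precision, recall, F1, FPR, pixel accuracy) are standard pixel-wise quantities derived from a confusion matrix. As a minimal sketch — not the paper's actual evaluation code, and assuming binary (background/chamber) masks — they can be computed like this:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise metrics for a binary segmentation mask pair."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.sum(pred & target)    # predicted chamber, truly chamber
    fp = np.sum(pred & ~target)   # predicted chamber, truly background
    fn = np.sum(~pred & target)   # missed chamber pixels
    tn = np.sum(~pred & ~target)  # correct background
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "iou": tp / (tp + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "fpr": fp / (fp + tn),
        "pixel_accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Toy 4x4 example: the prediction misses one chamber pixel and adds one.
target = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
pred   = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0]])
m = segmentation_metrics(pred, target)
```

For the toy masks this yields an IoU of 0.6 and a pixel accuracy of 0.875, illustrating why IoU is a stricter measure than pixel accuracy when the foreground region is small.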
Performance analysis of MobileNetV2 based automatic waste classification using transfer learning Firnando, Ricy; Buchari, Muhammad Ali; Marjusalinah, Anna Dwi; Willy; Abdurahman; Isnanto, Rahmat Fadli
Jurnal Mandiri IT Vol. 14 No. 1 (2025): July: Computer Science and Field.
Publisher : Institute of Computer Science (IOCS)

DOI: 10.35335/mandiri.v14i1.451

Abstract

The significant increase in global waste requires innovative and accessible solutions, aligning with Sustainable Development Goal (SDG) 12, which focuses on reducing the environmental impact of human activities. Automatic waste sorting using computer vision and deep learning offers a promising alternative to labor-intensive and risky manual methods. This study presents the design, implementation, and comprehensive performance analysis of an automated waste classification system, with a specific focus on evaluating its feasibility on hardware without specialized GPU accelerators. By applying transfer learning to a lightweight Convolutional Neural Network (CNN) architecture, MobileNetV2, a model was trained to classify six common waste categories: cardboard, glass, metal, paper, plastic, and other waste. The public “Garbage Classification” dataset from Kaggle, consisting of 2,527 images, was used for training and validation. The experiment was conducted using the tensorflow-cpu library, which does not require a dedicated GPU accelerator. After 10 training epochs, the model achieved a validation accuracy of 86.73%. Computational performance analysis showed an efficient average training time of 31.17 seconds per epoch and a fast average inference time of 14.47 milliseconds per image (~69 FPS) on the validation dataset. These findings demonstrate the feasibility of developing an effective AI-based waste classification system on hardware without a GPU accelerator, providing a realistic performance benchmark for the development of low-cost smart bins with embedded waste sorting in the future, thereby contributing to sustainable waste management practices.
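The transfer-learning setup described above — a frozen MobileNetV2 backbone topped with a new six-way classification head — can be sketched in Keras roughly as follows. This is a minimal illustration, not the authors' code: the input size, dropout rate, and optimizer are assumptions, and `weights=None` is used here only to keep the sketch offline-runnable, whereas transfer learning as described would load `weights="imagenet"`.

```python
import tensorflow as tf

NUM_CLASSES = 6        # cardboard, glass, metal, paper, plastic, other
IMG_SIZE = (224, 224)  # assumed; MobileNetV2's default input resolution

# Backbone: MobileNetV2 without its original ImageNet classifier head.
# The study's transfer-learning setup would use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights=None)
base.trainable = False  # freeze the backbone; only the new head trains

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),  # assumed regularization choice
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on the dataset for 10 epochs; the `tensorflow-cpu` package exposes the same API as full TensorFlow, which is what makes the GPU-free setup reported above possible.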