Improving Accuracy and Efficiency of Medical Image Segmentation Using One-Point-Five U-Net Architecture with Integrated Attention and Multi-Scale Mechanisms
Fathur Rohman, Muhammad Anang; Prasetyo, Heri; Yudha, Ery Permana; Hsia, Chih-Hsien
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.949

Abstract

Medical image segmentation is essential for supporting computer-aided diagnosis (CAD) systems by enabling accurate identification of anatomical and pathological structures across various imaging modalities. However, automated medical image segmentation remains challenging due to low image contrast, significant anatomical variability, and the need for computational efficiency in clinical applications. In addition, the scarcity of annotated medical images, caused by high labelling costs and the need for expert knowledge, further complicates the development of robust segmentation models. This study addresses these challenges by proposing One-Point-Five U-Net, a novel deep learning architecture designed to improve segmentation accuracy while maintaining computational efficiency. The main contribution of this work lies in the integration of multiple advanced mechanisms into a compact architecture: ghost modules, Multi-scale Residual Attention (MRA), Enhanced Parallel Attention (EPA) in skip connections, the Convolutional Block Attention Module (CBAM), and Multi-scale Depthwise Convolution (MSDC) in the decoder. The proposed method was trained and evaluated on four public datasets: CVC-ClinicDB, Kvasir-SEG, BUSI, and ISIC2018. One-Point-Five U-Net achieved sensitivity, specificity, accuracy, Dice similarity coefficient (DSC), and intersection over union (IoU) of 94.89%, 99.63%, 99.23%, 95.41%, and 91.27% on CVC-ClinicDB; 91.11%, 98.60%, 97.33%, 90.93%, and 83.84% on Kvasir-SEG; 85.35%, 98.65%, 96.81%, 87.02%, and 78.18% on BUSI; and 87.67%, 98.11%, 93.68%, 89.27%, and 83.06% on ISIC2018. These results outperform several state-of-the-art segmentation models. In conclusion, One-Point-Five U-Net demonstrates superior segmentation accuracy with only 626,755 parameters and 28.23 GFLOPs, making it a highly efficient and effective model for clinical implementation in medical image analysis.
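To make one of the named building blocks concrete, the sketch below shows a minimal PyTorch implementation of the Convolutional Block Attention Module (CBAM), which the abstract lists among the integrated mechanisms. This is an illustrative rendering of the standard CBAM design (channel attention followed by spatial attention), not the authors' code; the reduction ratio of 16, the 7x7 spatial kernel, and the example tensor shapes are assumptions chosen for demonstration.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM: channel attention followed by spatial attention,
    each applied multiplicatively to the feature map."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over globally average- and max-pooled features
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Spatial attention: convolution over channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention weights (B, C, 1, 1)
        avg_pool = torch.mean(x, dim=(2, 3), keepdim=True)
        max_pool = torch.amax(x, dim=(2, 3), keepdim=True)
        ca = torch.sigmoid(self.mlp(avg_pool) + self.mlp(max_pool))
        x = x * ca
        # Spatial attention weights (B, 1, H, W)
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        sa = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * sa

# Example: refine a hypothetical 64-channel decoder feature map
feats = torch.randn(1, 64, 128, 128)
refined = CBAM(64)(feats)
print(refined.shape)  # torch.Size([1, 64, 128, 128])

In an encoder-decoder segmentation network such as the one described, a block like this is typically inserted after a convolutional stage so that the decoder re-weights channels and spatial locations before producing the segmentation mask; how One-Point-Five U-Net combines it with the ghost, MRA, EPA, and MSDC components is detailed in the full paper.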