CP_SDUNet: road extraction using SDUNet and centerline preserving dice loss
Persada, Bayu Satria; Susanto, Muhammad Rifqi Priyo; Rahadianti, Laksmita; Arymurthy, Aniati Murni
IAES International Journal of Robotics and Automation (IJRA) Vol. 14, No. 2: June 2025
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijra.v14i2.pp260-272

Abstract

Existing automatic road extraction approaches for remote sensing images often fail because they learn only the structure or texture of an image rather than its spatial context, and they optimize for regional accuracy rather than connectivity. As a result, most approaches produce discontinuous outputs where roads are occluded by buildings and shadows or resemble rivers. This study addresses automatic road extraction, focusing on enhancing road connectivity and segmentation accuracy, by proposing CP_SDUNet: a road extraction network that combines a spatial intensifier module (DULR) and a densely connected U-Net architecture (SDUNet) with a connectivity-preserving loss function (CP_clDice). The study analyzes the CP_clDice loss function for the road extraction task against the BCE loss function for training the SDUNet model. The results show that CP_SDUNet performs best using an image size of 128×128 pixels, trained on the whole dataset with a combination of 20% clDice and 80% Dice loss. The proposed method obtains a clDice score of 0.85 and an Intersection over Union (IoU) score of 0.65 on the testing data. These findings demonstrate the potential of CP_SDUNet for reliable road extraction.
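The 20% clDice / 80% Dice weighting described in the abstract can be sketched as a weighted loss. The sketch below is a minimal NumPy illustration, not the authors' implementation: it assumes the skeletons of the prediction and ground truth are supplied as precomputed binary maps (in practice clDice uses differentiable soft-skeletonization, which is omitted here), and all function names are illustrative.

```python
import numpy as np

def soft_dice(pred, target, eps=1e-7):
    # Soft Dice coefficient over probability maps (1.0 = perfect overlap).
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cl_dice(pred, target, skel_pred, skel_target, eps=1e-7):
    # Centerline Dice: harmonic mean of topology precision and sensitivity,
    # computed on (here: precomputed) skeletons of prediction and ground truth.
    tprec = (np.sum(skel_pred * target) + eps) / (np.sum(skel_pred) + eps)
    tsens = (np.sum(skel_target * pred) + eps) / (np.sum(skel_target) + eps)
    return 2.0 * tprec * tsens / (tprec + tsens)

def combined_loss(pred, target, skel_pred, skel_target, alpha=0.2):
    # alpha = 0.2 reproduces the reported best mix: 20% clDice + 80% Dice.
    return alpha * (1.0 - cl_dice(pred, target, skel_pred, skel_target)) \
         + (1.0 - alpha) * (1.0 - soft_dice(pred, target))
```

The clDice term penalizes breaks along the road centerline even when the per-pixel overlap is high, which is why blending it with plain Dice trades a little regional accuracy for connectivity.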
MSDFF-RCNet: A Combined Multi-Structure Data Fusion Framework and Recurrent Attention for Remote Sensing Scene Classification
Hestrio, Yohanes; Persada, Bayu Satria; Saragih, Frederic Morado; Kardawi, Muhammad Yusuf; Jatmiko, Wisnu; Arymurthy, Aniati Murni
Jurnal Ilmu Komputer dan Informasi Vol. 19 No. 1 (2026): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher: Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v19i1.1475

Abstract

Remote sensing scene classification faces significant challenges in distinguishing visually similar land-use categories due to high intraclass variation and interclass similarity in high-resolution imagery. Although deep learning approaches have shown promise, single-architecture methods often fail to capture the diverse spatial and hierarchical features required for robust scene discrimination. This study proposes MSDFF-RCNet, a multi-structure data fusion framework combined with recurrent attention mechanisms to enhance remote sensing scene classification performance. The framework integrates complementary feature representations from the AlexNet, ResNet50, and DenseNet161 architectures, while the recurrent attention mechanism focuses on discriminative spatial regions for improved classification accuracy. Comprehensive experiments on four benchmark datasets demonstrate substantial performance improvements over the baseline ARCNet architecture, in percentage points: UC Merced (43.8% to 84.9%, +41.1), AID (63.8% to 94.4%, +30.6), NWPU-RESISC45 (61.5% to 95.4%, +33.9), and OPTIMAL-31 (47.3% to 87.9%, +40.6). Statistical significance analysis confirmed the reliability of these improvements (p < 0.01), and evaluation across precision, recall, and F1-score metrics validated the framework's robustness. Although the multi-structure approach requires substantial computational resources (a 25.6× parameter increase), the consistent and significant accuracy improvements across diverse datasets demonstrate the effectiveness of complementary feature fusion for remote sensing scene classification. The proposed framework provides a valuable contribution to automated Earth observation systems that require high-precision land-use classification capabilities.
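The multi-structure fusion idea, combining feature representations from three backbones before classification, can be sketched as late fusion by concatenation. The NumPy sketch below is only an illustration of that step: the feature dimensions are the standard pooled sizes for the three architectures (AlexNet 4096, ResNet50 2048, DenseNet161 2208), the random vectors stand in for real backbone outputs, and the recurrent attention mechanism from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pooled feature vectors from the three backbones
# (real outputs would come from pretrained networks).
f_alexnet = rng.standard_normal(4096)   # AlexNet fc feature size
f_resnet50 = rng.standard_normal(2048)  # ResNet50 global-pool feature size
f_densenet = rng.standard_normal(2208)  # DenseNet161 final feature size

def fuse(features):
    # Late fusion: concatenate per-backbone descriptors into one vector.
    return np.concatenate(features)

fused = fuse([f_alexnet, f_resnet50, f_densenet])

# A linear head maps the fused descriptor to class scores
# (45 classes, matching NWPU-RESISC45; weights here are random).
n_classes = 45
W = rng.standard_normal((n_classes, fused.size)) * 0.01
scores = W @ fused
pred_class = int(np.argmax(scores))
```

Concatenation is the simplest fusion choice and explains the reported parameter growth: the classifier head now sees the sum of all three feature widths (8352 dimensions here) instead of a single backbone's output.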