Existing approaches to automatic road map extraction from remote sensing images often fail because they cannot capture the spatial context of an image, learning only its structure or texture, and because they optimize regional accuracy rather than connectivity. As a result, most approaches produce discontinuous outputs where roads are occluded by buildings and shadows or visually resemble rivers. This study addresses automatic road extraction, focusing on enhancing road connectivity and segmentation accuracy, by proposing CP_SDUNet: a road extraction network that combines a spatial intensifier module (DULR) and a densely connected U-Net architecture (SDUNet) with a connectivity-preserving loss function (CP_clDice). The CP_clDice loss function is analyzed against the binary cross-entropy (BCE) loss for training the SDUNet model on the road extraction task. The results show that CP_SDUNet performs best with an image size of 128×128 pixels, trained on the whole dataset with a loss composed of 20% clDice and 80% Dice. The proposed method obtains a clDice score of 0.85 and an Intersection over Union (IoU) score of 0.65 on the testing data. These findings demonstrate the potential of CP_SDUNet for reliable road extraction.
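The abstract states only the 20%/80% weighting of the connectivity-preserving loss; the sketch below is an illustrative PyTorch implementation assuming CP_clDice follows the standard soft-clDice formulation (skeleton-based topology precision and sensitivity) combined with a soft Dice term. All function names here are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def soft_erode(img):
    # Soft morphological erosion via min-pooling (negated max-pooling).
    return -F.max_pool2d(-img, kernel_size=3, stride=1, padding=1)

def soft_dilate(img):
    # Soft morphological dilation via max-pooling.
    return F.max_pool2d(img, kernel_size=3, stride=1, padding=1)

def soft_open(img):
    return soft_dilate(soft_erode(img))

def soft_skel(img, iters=10):
    # Iterative soft skeletonization of a probability map in [0, 1].
    img1 = soft_open(img)
    skel = F.relu(img - img1)
    for _ in range(iters):
        img = soft_erode(img)
        img1 = soft_open(img)
        delta = F.relu(img - img1)
        skel = skel + F.relu(delta - skel * delta)
    return skel

def soft_cldice_loss(pred, target, iters=10, eps=1e-6):
    # Topology precision/sensitivity computed on soft skeletons.
    skel_pred = soft_skel(pred, iters)
    skel_true = soft_skel(target, iters)
    tprec = ((skel_pred * target).sum() + eps) / (skel_pred.sum() + eps)
    tsens = ((skel_true * pred).sum() + eps) / (skel_true.sum() + eps)
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def cp_cldice_loss(pred, target, alpha=0.2):
    # Connectivity-preserving loss: 20% clDice + 80% Dice, as stated in the abstract.
    return alpha * soft_cldice_loss(pred, target) + (1.0 - alpha) * soft_dice_loss(pred, target)

# Usage sketch: pred and target are [B, 1, H, W] road probability maps.
pred = torch.sigmoid(torch.randn(2, 1, 128, 128))
target = (torch.rand(2, 1, 128, 128) > 0.9).float()
loss = cp_cldice_loss(pred, target)
```

The clDice term penalizes breaks along the road centerline, so even thin gaps caused by occlusions raise the loss, while the Dice term preserves overall regional accuracy.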