This study evaluates environmental quality and urban walkability in Seoul's Dongdaemun district through geospatial semantic segmentation of street-view imagery. A DeepLab model with a ResNet-101 backbone, pre-trained on the ADE20K dataset and implemented with the GluonCV framework, was applied to Google Street View images collected at 40-meter intervals in four cardinal directions. Pixel-level segmentation was used to quantify key environmental features, including greenery, sky visibility, pavement, and road surfaces. From these visual attributes, composite indicators of comfort, convenience, and safety were derived and combined into an Integrated Visual Walkability (IVW) index. The results reveal clear spatial variation in walkability across the study area, highlighting both areas with favorable pedestrian environments and zones requiring improvement. Although the analysis is constrained by image quality and spatial coverage, the findings demonstrate the effectiveness of deep learning–based semantic segmentation for large-scale environmental assessment. This approach provides a scalable, data-driven framework to support evidence-based urban planning and sustainable city development.
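To make the pipeline concrete, the sketch below shows how a pre-trained DeepLab ResNet-101 model can be loaded from the GluonCV model zoo and applied to a single street-view image, with per-class pixel ratios computed from the segmentation map. The image filename is a hypothetical placeholder, and the ADE20K class indices are assumptions based on the standard 150-class ordering, not values reported by the study.

```python
import numpy as np
import mxnet as mx
from gluoncv.model_zoo import get_model
from gluoncv.data.transforms.presets.segmentation import test_transform

ctx = mx.cpu()  # or mx.gpu(0) if a GPU is available

# DeepLab (ResNet-101 backbone) pre-trained on ADE20K, as named in the abstract.
model = get_model('deeplab_resnet101_ade', pretrained=True, ctx=ctx)

# Hypothetical street-view image; the study samples images every 40 m
# in four cardinal directions along the street network.
img = mx.image.imread('gsv_point001_north.jpg')
img = test_transform(img, ctx)

# Per-pixel class prediction over the ADE20K 150-class label set.
output = model.predict(img)
pred = mx.nd.squeeze(mx.nd.argmax(output, 1)).asnumpy()

# Assumed indices from the standard ADE20K ordering (not from the study):
# 2 = sky, 4 = tree, 6 = road, 11 = sidewalk.
CLASSES = {'sky': 2, 'tree': 4, 'road': 6, 'sidewalk': 11}

# Share of image pixels occupied by each feature class.
total = pred.size
ratios = {name: float(np.sum(pred == idx)) / total
          for name, idx in CLASSES.items()}
print(ratios)
```

The abstract does not specify how the comfort, convenience, and safety indicators are constructed or weighted when forming the IVW index. A minimal illustrative sketch, assuming equal weights and simple attribute-to-indicator mappings that are not the study's published formulation:

```python
def ivw(ratios, weights=(1/3, 1/3, 1/3)):
    """Toy IVW score from class pixel ratios.

    The mappings and equal weights are illustrative assumptions only.
    """
    comfort = ratios.get('tree', 0.0) + ratios.get('sky', 0.0)      # greenery + openness
    convenience = ratios.get('sidewalk', 0.0)                       # walkable pavement
    safety = ratios.get('sidewalk', 0.0) - ratios.get('road', 0.0)  # separation from traffic
    w_c, w_v, w_s = weights
    return w_c * comfort + w_v * convenience + w_s * safety
```

In practice, scores like this would be computed for all four directional images at each 40-meter sampling point and aggregated per point before mapping spatial variation across the district.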