Accurate representation of visual characteristics in post-disaster building imagery is crucial for downstream analytical tasks such as damage interpretation, retrieval, and automated assessment. This study presents a focused investigation of feature extraction using a hybrid approach that integrates deep semantic representations from the GoogLeNet architecture with statistical texture descriptors inspired by the Gray-Level Co-Occurrence Matrix (GLCM). The objective of this work is strictly limited to the generation and analysis of semantic–textural feature vectors rather than the development or evaluation of any classification or prediction model. High-level feature maps are obtained from a selected convolutional layer of GoogLeNet, after which statistical texture properties—contrast, energy, and homogeneity—are computed per channel. A representative set of feature channels is analyzed to demonstrate the capabilities of the proposed hybrid extraction pipeline. The results indicate the potential of semantic–textural descriptors to provide interpretable feature characteristics in building-damage imagery. This study provides a methodological foundation and analytical insight for future work that may incorporate these feature representations into classification, clustering, or decision-support frameworks.
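The per-channel texture statistics named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a feature map is already available as a 2-D NumPy array (the GoogLeNet extraction step, e.g. via a forward hook on a chosen torchvision layer, is omitted), and the gray-level count, offset, and helper name `glcm_stats` are hypothetical choices for illustration.

```python
import numpy as np

def glcm_stats(channel, levels=8, offset=(0, 1)):
    """Quantize a 2-D feature-map channel to `levels` gray levels,
    build a gray-level co-occurrence matrix for the given pixel
    offset, and return the contrast, energy, and homogeneity
    descriptors (hypothetical helper for illustration)."""
    # Normalize activations to integer gray levels in [0, levels - 1].
    lo, hi = channel.min(), channel.max()
    if hi == lo:
        q = np.zeros_like(channel, dtype=int)
    else:
        q = np.minimum((levels * (channel - lo) / (hi - lo)).astype(int),
                       levels - 1)
    # Accumulate co-occurrence counts for pixel pairs at offset (dr, dc).
    dr, dc = offset
    rows, cols = q.shape
    glcm = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability distribution
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)          # local intensity variation
    energy = np.sum(glcm ** 2)                      # textural uniformity
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))  # closeness to diagonal
    return contrast, energy, homogeneity

# Example on a synthetic "channel" standing in for a GoogLeNet activation map;
# in the actual pipeline each channel of the selected layer would be processed.
rng = np.random.default_rng(0)
fmap = rng.random((14, 14))
contrast, energy, homogeneity = glcm_stats(fmap)
print(contrast, energy, homogeneity)
```

Repeating this over every channel of the selected layer yields the per-channel contrast/energy/homogeneity triples that form the semantic–textural feature vector described in the study.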
Copyright © 2026