Lung cancer segmentation with positron emission tomography (PET) and computed tomography (CT) images plays a critical role in accurately detecting lung cancer. Nevertheless, lung tumor segmentation in PET/CT images is extremely difficult due to motion caused by respiration. In addition, lung tumor images exhibit large variations, both in PET images and in CT images: although PET and CT images are acquired concurrently, the apparent shape and size of a lung tumor vary with the modality. To address these issues, we developed a residual edge dense enhanced module network (REDEM-NET) framework for lung tumor stage classification. The proposed REDEM-NET takes PET and CT images as inputs. A dense residual convolutional network (DRCN) receives both inputs and extracts high-dimensional features concurrently. The extracted features from both imaging modalities are fed into UNet+++ to obtain multi-level decoded features. The decoded features are supplied concurrently to a pixel-level learning module (PELM) and an edge-level learning module (E2LM), yielding two outputs for subsequent learning. These outputs are merged to produce a highly precise lung tumor segmentation. Finally, the segmented tumor is fed to a multi-class support vector machine (MC-SVM) for lung tumor stage classification, which identifies three stage categories and their substages, namely primary tumor, regional lymph node, and distant metastasis.
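The abstract does not specify how the PELM and E2LM outputs are merged; the sketch below illustrates one plausible fusion step, assuming each module emits a per-pixel probability map and that the merge is a weighted sum followed by thresholding. The function name `fuse_outputs`, the weight `alpha`, and the threshold value are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def fuse_outputs(pixel_map: np.ndarray,
                 edge_map: np.ndarray,
                 alpha: float = 0.5,
                 threshold: float = 0.5) -> np.ndarray:
    """Merge hypothetical PELM and E2LM probability maps into a binary tumor mask.

    Weighted-sum fusion and the 0.5 threshold are assumptions made for
    illustration; the paper does not state its fusion rule.
    """
    assert pixel_map.shape == edge_map.shape
    merged = alpha * pixel_map + (1.0 - alpha) * edge_map  # weighted fusion
    return (merged >= threshold).astype(np.uint8)          # binarize to a mask

# Toy example: 4x4 random maps standing in for real PELM/E2LM outputs.
rng = np.random.default_rng(0)
pixel_map = rng.random((4, 4))
edge_map = rng.random((4, 4))
mask = fuse_outputs(pixel_map, edge_map)
print(mask.shape)  # (4, 4)
```

The binary mask produced here would then be the segmentation passed to the downstream MC-SVM stage classifier.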
Copyright © 2025