This study presents a comprehensive comparative analysis of Convolutional Neural Network (CNN)-based deep learning architectures for early brain tumor detection and classification using multi-modal medical imaging. The primary objective is to evaluate and integrate advanced deep neural network models, including EfficientNet-B2, VGG16, U-Net, and a hybrid CNN-LSTM model, to enhance diagnostic accuracy, precision, and robustness. The proposed framework comprises five key stages: image acquisition from MRI, CT, PET, and ultrasound modalities; preprocessing through normalization, skull stripping, noise reduction, and registration; segmentation of tumor regions; feature extraction; and classification using optimized deep learning algorithms. Experimental evaluation demonstrates that the hybrid CNN-LSTM model achieved the highest overall performance, with an accuracy of 98.81%, precision of 98.90%, recall of 98.90%, and F1-score of 99%. The EfficientNet-B2 model followed closely with 98.73% accuracy, 98.73% precision, 99.13% recall, and 98.79% F1-score, confirming its strength in efficient feature utilization and computational scalability. In contrast, VGG16 and U-Net achieved accuracies of 93.27% and 88%, respectively, indicating limited adaptability to complex tumor morphologies. The findings reveal that CNN-based hybrid models outperform traditional architectures by effectively capturing both spatial and temporal dependencies in MRI data, leading to improved interpretability and clinical reliability. The novelty of this research lies in its methodological integration of convolutional and recurrent layers within a unified diagnostic framework, establishing a reproducible, high-performance model for early brain tumor detection. The study contributes to the advancement of intelligent medical imaging systems by demonstrating that hybrid deep learning architectures can significantly reduce diagnostic uncertainty and enable more precise, automated clinical decision support for early intervention.
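
As a minimal illustrative sketch of how convolutional and recurrent layers can be combined in such a hybrid CNN-LSTM classifier, the following Keras example processes a sequence of MRI slices per scan. The slice count, input resolution, layer sizes, and four-class output are assumptions for illustration only, not the configuration reported in this study.

```python
# Illustrative hybrid CNN-LSTM classifier for MRI slice sequences.
# All hyperparameters below are assumed values, not the study's reported setup.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SLICES = 16   # assumed number of MRI slices per scan
IMG_SIZE = 128    # assumed input resolution after preprocessing
NUM_CLASSES = 4   # assumed tumor classes (e.g. glioma, meningioma, pituitary, none)

def build_cnn_lstm():
    inputs = layers.Input(shape=(NUM_SLICES, IMG_SIZE, IMG_SIZE, 1))

    # Convolutional feature extractor applied to each slice independently
    x = layers.TimeDistributed(
        layers.Conv2D(32, 3, activation="relu", padding="same"))(inputs)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(
        layers.Conv2D(64, 3, activation="relu", padding="same"))(x)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(layers.Flatten())(x)

    # Recurrent layer aggregates spatial features across the slice sequence,
    # capturing inter-slice (temporal-like) dependencies
    x = layers.LSTM(128)(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```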