Brain tumor segmentation in MRI scans is a crucial task in medical imaging, enabling early diagnosis and treatment planning. However, accurately segmenting tumors remains challenging due to variations in tumor shape, size, and intensity. This study proposes a ResNet-UNet-based segmentation model trained on the LGG dataset (from 110 patients) and optimized through hyperparameter tuning to improve segmentation performance and computational efficiency. The proposed model integrates different ResNet architectures (ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152) with UNet, evaluating their performance under various learning rates (0.01, 0.001, 0.0001), optimizers (Adam, SGD, RMSProp), and the Sigmoid activation function. The methodology involves training and evaluating each model using the loss value, Mean Intersection over Union (mIoU), Dice Similarity Coefficient (DSC), and iterations per second as performance metrics. Experiments were conducted on MRI brain tumor data to assess the impact of hyperparameter tuning on model performance. Results show that lower learning rates (0.0001 and 0.001) improve segmentation accuracy, and that Adam and RMSProp outperform SGD in minimizing segmentation error. Deeper models (ResNet50, ResNet101, and ResNet152) achieve the highest mIoU (up to 0.902) and DSC (up to 0.928), but at the cost of slower inference. ResNet50 and ResNet34 with RMSProp or Adam provide an optimal trade-off between accuracy and computational efficiency. In conclusion, hyperparameter tuning significantly impacts MRI segmentation performance, and selecting an appropriate learning rate, optimizer, and model depth is crucial for achieving high segmentation accuracy at minimal computational cost.
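As a concrete illustration of the experimental grid described above, the sketch below assembles one ResNet-UNet configuration per (encoder, optimizer, learning rate) combination. This is a minimal sketch, not the authors' implementation: it assumes the segmentation_models_pytorch library (which pairs a UNet decoder with a ResNet encoder), ImageNet-pretrained encoder weights, 3-channel input slices, and a binary cross-entropy loss as a placeholder, none of which are specified in the abstract.

```python
# Illustrative sketch only; library choice, weights, and loss are assumptions.
import torch
import segmentation_models_pytorch as smp

ENCODERS = ["resnet18", "resnet34", "resnet50", "resnet101", "resnet152"]
LEARNING_RATES = [0.01, 0.001, 0.0001]
OPTIMIZERS = {
    "Adam": torch.optim.Adam,
    "SGD": torch.optim.SGD,
    "RMSProp": torch.optim.RMSprop,
}

def build_configuration(encoder: str, optimizer_name: str, lr: float):
    """Assemble one ResNet-UNet variant from the hyperparameter grid."""
    model = smp.Unet(
        encoder_name=encoder,        # ResNet backbone used as the UNet encoder
        encoder_weights="imagenet",  # assumed initialization; not stated in the abstract
        in_channels=3,               # assumed 3-channel MRI slices from the LGG dataset
        classes=1,                   # binary tumor mask
        activation="sigmoid",        # Sigmoid output, as evaluated in the study
    )
    optimizer = OPTIMIZERS[optimizer_name](model.parameters(), lr=lr)
    criterion = torch.nn.BCELoss()   # placeholder; the abstract does not name the loss
    return model, optimizer, criterion

# Example: the ResNet50 + Adam + 0.0001 configuration highlighted in the results.
model, optimizer, criterion = build_configuration("resnet50", "Adam", 0.0001)
```

Iterating this constructor over the listed encoders, learning rates, and optimizers would cover the hyperparameter grid the abstract describes; training and metric computation (mIoU, DSC, iterations per second) would follow separately.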
                        
                        
                        
                        
                            