This study addresses the difficulty of comparing deep learning–based brain cancer detection methods, which arises from differences in datasets and parameter settings and limits the generalizability of previous findings. The purpose of this research is to evaluate the performance of several convolutional neural network (CNN) architectures on identical datasets and experimental configurations in order to determine the most effective technique for early brain cancer detection. The study builds a comparative framework using the Keras API on TensorFlow, supported by libraries such as NumPy, Pandas, Matplotlib, and Seaborn. All datasets were split into stratified training, validation, and test sets, and preprocessing included resizing images to 224×224 pixels, converting them to 3-channel RGB, normalizing the inputs, and applying data augmentation. The CNN architectures VGG16, ResNet50, GoogLeNet, and AlexNet were trained with consistent settings, including epoch count, batch size, learning rate, optimizer, and training protocol. Performance evaluation using accuracy, precision, recall, and F1-score shows that GoogLeNet and ResNet50 achieve the highest results across datasets (average >94%), with GoogLeNet slightly outperforming ResNet50. AlexNet performs poorly on the Kaggle dataset but shows potential on the private dataset, while VGG16 demonstrates moderate but less consistent performance. The originality of this study lies in providing a unified evaluation framework that enables fair comparison across CNN models, offering valuable insights for selecting optimal architectures for brain cancer detection.
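
As an illustration, the following is a minimal sketch of the shared preprocessing and training configuration described above, using the Keras API on TensorFlow. The directory layout, the hyperparameter values (batch size, epoch count, learning rate), the Adam optimizer, and the number of classes are illustrative assumptions rather than the study's reported settings; GoogLeNet and AlexNet are not bundled with Keras, so only the bundled VGG16 and ResNet50 backbones appear in the sketch.

    # Sketch of a unified preprocessing and training setup (Keras on TensorFlow).
    # Paths, hyperparameters, optimizer, and class count are illustrative assumptions.
    import tensorflow as tf
    from tensorflow import keras

    IMG_SIZE = (224, 224)       # images resized to 224x224, 3-channel RGB
    BATCH_SIZE = 32             # assumed batch size
    EPOCHS = 30                 # assumed epoch count
    LEARNING_RATE = 1e-4        # assumed learning rate

    def load_split(directory):
        """Load one stratified split (train/val/test) from class-labelled folders."""
        return keras.utils.image_dataset_from_directory(
            directory,
            image_size=IMG_SIZE,
            batch_size=BATCH_SIZE,
            label_mode="categorical",
        )

    # Hypothetical directory layout: one folder per class under each split.
    train_ds = load_split("data/train")
    val_ds = load_split("data/val")
    test_ds = load_split("data/test")

    # Shared preprocessing: normalize inputs to [0, 1] and apply light augmentation.
    preprocessing = keras.Sequential([
        keras.layers.Rescaling(1.0 / 255),
        keras.layers.RandomFlip("horizontal"),
        keras.layers.RandomRotation(0.1),
    ])

    def build_model(backbone_name, num_classes):
        """Attach the same classification head to a chosen backbone so that all
        architectures are trained under identical settings."""
        backbones = {
            "vgg16": keras.applications.VGG16,
            "resnet50": keras.applications.ResNet50,
        }
        base = backbones[backbone_name](include_top=False, weights=None,
                                        input_shape=IMG_SIZE + (3,), pooling="avg")
        inputs = keras.Input(shape=IMG_SIZE + (3,))
        x = preprocessing(inputs)
        x = base(x)
        outputs = keras.layers.Dense(num_classes, activation="softmax")(x)
        return keras.Model(inputs, outputs)

    model = build_model("resnet50", num_classes=4)   # assumed number of classes
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=LEARNING_RATE),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
    model.evaluate(test_ds)

Under these assumptions, precision, recall, and F1-score on the held-out test set could then be computed from the model's test predictions, for example with scikit-learn's classification_report.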