Graphical Abstract Highlights

- ResNet-50 achieved the highest accuracy for classifying 20 marine fish species in this study.
- ResNet-50 outperformed both AlexNet and GoogLeNet in the performance comparison.
- Transfer learning enabled effective feature extraction from a limited dataset.
- Deep learning models offer potential for automating the classification of marine fish.

Abstract

Accurately identifying marine fish species can be difficult because of their subtle anatomical and colour-pattern similarities, which often lead to misclassification during ecological assessments and fisheries operations. Manual identification is time-consuming and error-prone, especially in high-throughput environments such as fish markets. In this study, transfer learning was used to evaluate three deep learning models, ResNet-50, AlexNet and GoogLeNet, on 20,325 images of twenty marine fish species acquired from Kuantan (Pahang) and Mengabang Telipot (Kuala Nerus), Malaysia. All images were morphologically categorized as complete fish, head, body or tail. The dataset was preprocessed with image resizing, pixel normalization and data augmentation, the latter consisting of random rotation (±15°), horizontal flipping, brightness and contrast adjustment (±20%) and cropping. The dataset was then partitioned into an 80% training set (16,260 images), a 10% validation set (2,032 images) and a 10% testing set (2,033 images). Classification performance was analysed using confusion matrices and standard metrics: accuracy, precision and recall. ResNet-50 outperformed the other models, achieving 100% accuracy, precision and recall in every category. GoogLeNet and AlexNet followed with 99.5% and 99.4% accuracy, respectively. This study shows that deep learning models, particularly ResNet-50, provide an accurate and efficient way to classify fish species automatically.
With multi-view images, data augmentation and transfer learning, the model performs well even under difficult visual conditions. These results support its use in real-time fisheries monitoring, biodiversity studies and environmental impact assessments.
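The 80/10/10 partition described in the abstract can be reproduced arithmetically. The sketch below (a hypothetical helper, not the authors' code) assumes the common convention of flooring the training and validation counts and assigning the remainder to the test set, which recovers the exact counts reported (16,260 / 2,032 / 2,033):

```python
def split_counts(n_total, train_frac=0.8, val_frac=0.1):
    """Return (train, val, test) image counts for an n_total-image dataset.

    Floors the train and validation counts and gives the remainder to the
    test set, so the three counts always sum to n_total.
    """
    n_train = int(n_total * train_frac)
    n_val = int(n_total * val_frac)
    n_test = n_total - n_train - n_val  # remainder absorbs rounding
    return n_train, n_val, n_test

print(split_counts(20325))  # (16260, 2032, 2033) — matches the reported split
```

The remainder-to-test convention explains why the test set (2,033) is one image larger than the validation set (2,032) despite both being nominally 10%.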
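The evaluation described in the abstract derives accuracy, precision and recall from a confusion matrix. A minimal sketch of that computation is shown below (a hypothetical helper written for illustration, not the authors' code), with rows as true classes and columns as predicted classes:

```python
def metrics_from_confusion(cm):
    """Compute overall accuracy and per-class precision/recall.

    cm is a square list-of-lists confusion matrix:
    rows = true class, columns = predicted class.
    """
    n_classes = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n_classes))  # diagonal = hits
    accuracy = correct / total
    precision, recall = [], []
    for i in range(n_classes):
        col_sum = sum(cm[r][i] for r in range(n_classes))  # predicted as i
        row_sum = sum(cm[i])                               # truly class i
        precision.append(cm[i][i] / col_sum if col_sum else 0.0)
        recall.append(cm[i][i] / row_sum if row_sum else 0.0)
    return accuracy, precision, recall

# Toy 3-class example: a purely diagonal matrix (no misclassifications),
# the pattern a perfect classifier such as the reported ResNet-50 produces.
cm = [[10, 0, 0], [0, 12, 0], [0, 0, 8]]
acc, prec, rec = metrics_from_confusion(cm)
```

On a diagonal confusion matrix, accuracy and every per-class precision and recall equal 1.0, which is the 100%-in-every-category result the abstract reports for ResNet-50.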