Activation functions are a critical component of the feature extraction layers of deep learning models, influencing their ability to identify patterns and extract meaningful features from input data. This study investigates the impact of five widely used activation functions (ReLU, SELU, ELU, sigmoid, and tanh) on convolutional neural network (CNN) performance when combined with sharpening filters for feature extraction. Each activation function was evaluated using a custom-built CNN module within the researchers’ machine learning library, Analytical Libraries for Intelligent-computing (ALI), by analyzing the mean squared error (MSE) values obtained during training. The results show that ReLU consistently outperformed the other activation functions, achieving the lowest MSE values and making it the most effective choice for feature extraction with sharpening filters. The study offers both practical and theoretical insights, highlighting the significance of selecting suitable activation functions to enhance CNN performance. These findings contribute to optimizing CNN architectures and provide a reference for future work in image processing and other machine learning applications that rely on feature extraction layers. More broadly, the research underscores activation function selection as a fundamental consideration in deep learning model design.
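
To make the comparison described above concrete, the following minimal sketch in plain NumPy convolves a 3×3 sharpening kernel with a toy image, applies each of the five activation functions to the result, and computes an MSE against a reference output. The toy image, the reference target, and the helper function are illustrative assumptions only; this is not the ALI implementation or the experimental setup used in the study.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# The five activation functions compared in the study.
activations = {
    "relu":    lambda x: np.maximum(0.0, x),
    "selu":    lambda x: 1.0507 * np.where(x > 0, x, 1.67326 * (np.exp(x) - 1)),
    "elu":     lambda x: np.where(x > 0, x, np.exp(x) - 1),
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh":    np.tanh,
}

# Standard 3x3 sharpening kernel.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

rng = np.random.default_rng(0)
image = rng.random((8, 8))        # toy input image (assumed, not from the paper)
target = conv2d(image, sharpen)   # toy reference output for the MSE comparison

for name, act in activations.items():
    feature_map = act(conv2d(image, sharpen))
    mse = np.mean((feature_map - target) ** 2)
    print(f"{name:8s} MSE = {mse:.4f}")
```

In the study itself, MSE is measured during CNN training rather than against a fixed sharpened target; the sketch only illustrates how a sharpening filter and an activation function combine to produce the feature maps whose error is being compared.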