Facial expression recognition (FER) is a highly active field with applications in computer vision, human-computer interaction, security, and computer graphics animation. Recent advances in deep learning and machine learning have increased interest in applying these techniques to accurate facial expression classification. This paper presents a comparative study that evaluates the performance of deep learning and machine learning as classifiers in FER systems, specifically after data fusion. Data fusion combines multiple sources of information to enhance overall classification accuracy; in this work, two types of features, geometric and appearance-based, are extracted and learned by two convolutional neural networks, and the feature outputs of these networks are fused into a final feature vector for classification. The deep learning approach is evaluated on two benchmark datasets, the extended Cohn-Kanade (CK+) and Oulu-CASIA datasets. As a point of comparison, a traditional machine learning approach based on the support vector machine (SVM) is evaluated on the same datasets. Performance is measured using classification accuracy, precision, recall, and F1-score. The results highlight the strengths and limitations of both deep learning and machine learning techniques when employed as classifiers in FER systems. Notably, the experimental results demonstrate that the deep learning approach significantly outperforms the baseline methods, improving recognition accuracy by 5.22% on the CK+ dataset and 3.07% on the Oulu-CASIA dataset.
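To make the feature-level fusion described above concrete, the following is a minimal sketch, assuming a PyTorch implementation with two small CNN branches standing in for the geometric- and appearance-feature networks; the layer choices, feature dimensions, fusion by concatenation, and the 7-class output are illustrative assumptions rather than the authors' exact architecture.

```python
# Illustrative sketch only: two CNN branches (stand-ins for the appearance- and
# geometric-feature networks) whose outputs are concatenated into a single
# fused feature vector and passed to a classifier head. All dimensions and
# layer choices are assumptions for demonstration.
import torch
import torch.nn as nn


class SimpleBranch(nn.Module):
    """A small CNN that maps an input image to a fixed-length feature vector."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class FusionFERModel(nn.Module):
    """Fuses the feature outputs of two branches and classifies the result."""

    def __init__(self, num_classes: int = 7, feat_dim: int = 128):
        super().__init__()
        self.appearance_branch = SimpleBranch(in_channels=1, feat_dim=feat_dim)
        self.geometric_branch = SimpleBranch(in_channels=1, feat_dim=feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, appearance_input, geometric_input):
        f_app = self.appearance_branch(appearance_input)
        f_geo = self.geometric_branch(geometric_input)
        fused = torch.cat([f_app, f_geo], dim=1)  # feature-level fusion
        return self.classifier(fused)


# Usage example with random tensors standing in for preprocessed face inputs.
model = FusionFERModel()
appearance = torch.randn(4, 1, 48, 48)
geometric = torch.randn(4, 1, 48, 48)
logits = model(appearance, geometric)
print(logits.shape)  # torch.Size([4, 7])
```

The fused vector produced by the concatenation step could equally be handed to an SVM (e.g., scikit-learn's SVC) for the machine learning baseline described in the study; that substitution is an assumption about how such a comparison is typically set up.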