Participant feedback in the form of comments, criticisms, and suggestions is typically unstructured text data that has not been optimally utilized as supporting information for training evaluation. This study aims to classify the sentiment of training participants' feedback into positive, negative, and neutral categories using a machine learning approach, and to compare the performance of the Naïve Bayes and Support Vector Machine (SVM) algorithms. In addition, it examines the effect of hyperparameter optimization on the performance of the resulting sentiment analysis models. The research methodology comprises text preprocessing, feature extraction using Term Frequency–Inverse Document Frequency (TF-IDF), sentiment classification modeling, and performance evaluation using accuracy, precision, recall, and F1-score metrics. Model optimization is conducted through hyperparameter tuning with the Grid Search and Random Search methods. The results show that, across 487 participant feedback comments, the sentiment distribution is dominated by positive sentiment. Model evaluation indicates that SVM consistently achieves higher accuracy than Naïve Bayes, with a highest accuracy of 79.0%, compared with a maximum of 65.3% for Naïve Bayes. Furthermore, hyperparameter optimization is shown to improve the performance of both algorithms, particularly Naïve Bayes. The findings are, however, descriptive in nature and are intended to complement, rather than replace, existing survey-based methods and training management evaluation processes.
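The pipeline outlined above (TF-IDF feature extraction, Naïve Bayes and SVM classifiers, grid-search hyperparameter tuning) can be sketched with scikit-learn as below. This is a minimal illustration only: the toy comments, labels, and parameter grids are assumptions for demonstration, not the study's actual 487-comment dataset, preprocessing steps, or search spaces.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

# Hypothetical toy feedback comments with three sentiment classes
texts = [
    "the trainer explained the material clearly",
    "great delivery and useful examples",
    "very helpful and well organized session",
    "excellent hands on practice",
    "the room was too small and crowded",
    "the pace was too fast to follow",
    "audio quality was poor throughout",
    "not enough time for practice",
    "the session started at nine",
    "handouts were distributed to everyone",
    "the training lasted two days",
    "lunch was served at noon",
]
labels = ["positive"] * 4 + ["negative"] * 4 + ["neutral"] * 4

# Compare both classifiers under the same TF-IDF features,
# tuning each with a small (illustrative) grid of hyperparameters
for name, clf, grid in [
    ("naive_bayes", MultinomialNB(), {"clf__alpha": [0.1, 1.0]}),
    ("svm", LinearSVC(), {"clf__C": [0.1, 1.0, 10.0]}),
]:
    pipe = Pipeline([("tfidf", TfidfVectorizer()), ("clf", clf)])
    search = GridSearchCV(pipe, grid, cv=2, scoring="f1_macro")
    search.fit(texts, labels)
    print(name, search.best_params_, round(search.best_score_, 3))
```

A Random Search variant would swap `GridSearchCV` for `RandomizedSearchCV` with distributions over the same parameters; on real data, the macro-averaged F1 used here is a common choice when the class distribution is imbalanced, as the positive-dominated distribution reported above suggests.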