This research investigates the effectiveness of hyperparameter tuning, particularly with Optuna, in enhancing the classification performance of machine learning models on scientific work reviews. The study focuses on automating the classification of academic papers into eight distinct fields: decision support systems, information technology, data science, technology education, artificial intelligence, expert systems, image processing, and information systems. The research dataset comprises reviews of scientific papers of 150 to 500 words, collected from the repository of Universitas Putra Indonesia YPTK Padang. The classification process applied the TF-IDF method for feature extraction, followed by classification with several machine learning algorithms, including SVM, MNB, KNN, and RF, evaluated with and without SMOTE for data balancing and Optuna for hyperparameter optimization. The results show that combining SMOTE with Optuna significantly improves the accuracy, precision, recall, and F1-score of the models, with the SVM algorithm achieving the highest accuracy at 90%. Additionally, the research explored the effectiveness of ensemble methods, revealing that hard voting combined with SMOTE and Optuna yielded substantial improvements in classification performance. These findings underscore the importance of hyperparameter tuning and data balancing in optimizing machine learning models for text classification tasks. The implications of this research are broad, suggesting that the methodologies developed can be applied to text classification tasks in other domains. Future research should explore additional hyperparameter tuning techniques and ensemble methods to further enhance model performance across diverse datasets.
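The abstract does not include code; the following is a minimal sketch, under stated assumptions, of how the described pipeline (TF-IDF features, SMOTE balancing, Optuna tuning of an SVM) might be assembled with scikit-learn, imbalanced-learn, and Optuna. The `texts` and `labels` placeholders, the search space for C, gamma, and kernel, and the trial budget are illustrative assumptions, not the study's actual settings.

```python
# Sketch: TF-IDF + SMOTE + Optuna-tuned SVM for 8-class review classification.
# `texts` and `labels` below are hypothetical placeholders for the review dataset.
import optuna
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

texts = [f"placeholder review text {i} about field {i % 8}" for i in range(80)]
labels = [i % 8 for i in range(80)]  # 8 placeholder field labels

# Split raw texts, then fit TF-IDF on the training portion only
train_txt, test_txt, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42
)
vectorizer = TfidfVectorizer(max_features=5000)
X_train = vectorizer.fit_transform(train_txt)
X_test = vectorizer.transform(test_txt)

# SMOTE oversampling applied to the training split only
X_bal, y_bal = SMOTE(random_state=42, k_neighbors=3).fit_resample(X_train, y_train)

def objective(trial):
    # Illustrative SVM search space; the abstract does not state the actual ranges
    params = {
        "C": trial.suggest_float("C", 1e-3, 1e3, log=True),
        "gamma": trial.suggest_float("gamma", 1e-4, 1e1, log=True),
        "kernel": trial.suggest_categorical("kernel", ["linear", "rbf"]),
    }
    clf = SVC(**params)
    return cross_val_score(clf, X_bal, y_bal, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)

best_model = SVC(**study.best_params).fit(X_bal, y_bal)
print("Best params:", study.best_params)
print("Test accuracy:", best_model.score(X_test, y_test))
```

The same pattern extends to MNB, KNN, and RF by swapping the estimator and its search space inside `objective`, and the tuned models can then be combined in a hard-voting ensemble (e.g., scikit-learn's `VotingClassifier`).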