In the digital era, Indonesian-language texts have proliferated rapidly across social media, online news, blogs, and digital documents, and they frequently contain figurative language styles such as personification, metaphor, hyperbole, euphemism, and irony. Identifying these language styles manually is inefficient at scale, and automatic classification is further complicated by imbalanced class distributions. This study compares the performance of the Naïve Bayes and K-Nearest Neighbor (KNN) algorithms in classifying figurative language styles in Indonesian texts and evaluates the impact of the Synthetic Minority Over-sampling Technique (SMOTE) and hyperparameter tuning on model accuracy. The dataset consists of 5,155 original samples, which grow to 6,240 samples after SMOTE is applied, with an 80:20 train-test split. Evaluation was conducted under four scenarios: no SMOTE and no tuning, SMOTE without tuning, tuning without SMOTE, and both SMOTE and tuning. The results show that Naïve Bayes delivered stable performance with an accuracy of up to 93.19%, while KNN reached its highest accuracy of 93.43% after SMOTE and tuning were applied. SMOTE and hyperparameter tuning proved effective in improving accuracy, particularly for KNN. These findings highlight the contribution of data balancing and parameter optimization to the automatic classification of figurative language styles in Indonesian texts.
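The abstract does not specify the feature representation, libraries, or parameter grids used; the following is a minimal sketch of the four-scenario comparison, assuming TF-IDF features with scikit-learn and imbalanced-learn, where the hyperparameter ranges and preprocessing choices are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of the four evaluation scenarios (SMOTE on/off x tuning on/off),
# assuming a scikit-learn / imbalanced-learn pipeline with TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

def evaluate(texts, labels, use_smote=False, tune=False):
    """Return test accuracy of Naive Bayes and KNN under one scenario."""
    X = TfidfVectorizer().fit_transform(texts)  # assumed TF-IDF representation
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=42)  # 80:20 split
    if use_smote:
        # Oversample minority figurative-style classes on the training set only
        X_tr, y_tr = SMOTE(random_state=42).fit_resample(X_tr, y_tr)
    models = {
        "naive_bayes": (MultinomialNB(), {"alpha": [0.1, 0.5, 1.0]}),       # assumed grid
        "knn": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7, 9]}),     # assumed grid
    }
    results = {}
    for name, (clf, grid) in models.items():
        if tune:
            # Hyperparameter tuning via cross-validated grid search (assumed method)
            clf = GridSearchCV(clf, grid, cv=5, scoring="accuracy")
        clf.fit(X_tr, y_tr)
        results[name] = accuracy_score(y_te, clf.predict(X_te))
    return results
```

Calling `evaluate` with each of the four `(use_smote, tune)` combinations reproduces the scenario grid summarized above.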