Sentiment analysis of movie reviews often exhibits genre-based bias: model performance varies significantly across subgroups, an issue that standard accuracy metrics can mask. To address this, we propose a novel fairness-aware hybrid model, BERT-SVM (Fairness-Tuned), which integrates sample re-weighting targeted at the lowest-performing genre into a BERT-SVM pipeline. Using a public IMDb movie review dataset from Kaggle, we first train a standard BERT-SVM model and identify Horror as the weakest-performing genre (72.3% accuracy vs. 89.6% overall). We then apply targeted re-weighting to upsample underrepresented or misclassified Horror samples during training. The Fairness-Tuned model reduces the accuracy gap by 62%, raising Horror accuracy to 83.1% while maintaining strong overall performance (87.4%). This work both quantifies the fairness–accuracy trade-off and demonstrates that lightweight, genre-specific bias mitigation within a hybrid architecture can enhance equity without drastic model redesign, highlighting the value of explicit fairness evaluation in NLP applications.
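To make the mitigation step concrete, the sketch below shows one plausible implementation of genre-targeted re-weighting in a BERT-to-SVM pipeline: frozen BERT [CLS] embeddings feed a scikit-learn SVM, and samples from the weakest genre receive an elevated per-sample weight during SVM fitting. The boost value (3.0), the `embed` and `fairness_weights` helpers, and the toy data are illustrative assumptions, not details drawn from the paper.

```python
# Minimal sketch: fairness-tuned BERT-SVM via per-sample re-weighting.
# Assumes Hugging Face transformers and scikit-learn; helper names,
# the boost factor, and the toy data below are hypothetical.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def embed(texts, batch_size=32):
    """Encode reviews as frozen-BERT [CLS] embeddings."""
    vecs = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(texts[i:i + batch_size], padding=True,
                              truncation=True, max_length=256,
                              return_tensors="pt")
            out = bert(**batch)
            vecs.append(out.last_hidden_state[:, 0, :].numpy())
    return np.vstack(vecs)

def fairness_weights(genres, weak_genre="Horror", boost=3.0):
    """Upweight samples from the weakest-performing genre; others get 1.0."""
    genres = np.asarray(genres)
    return np.where(genres == weak_genre, boost, 1.0)

# Toy data for illustration only (the paper uses an IMDb dataset from Kaggle).
texts = ["A terrifying, brilliant film.", "Dull and predictable plot."]
labels = [1, 0]            # 1 = positive sentiment, 0 = negative
genres = ["Horror", "Drama"]

X = embed(texts)
w = fairness_weights(genres)
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, labels, sample_weight=w)  # sklearn SVMs accept per-sample weights
```

Passing `sample_weight` to `SVC.fit` scales each sample's misclassification penalty, which for integer weights is roughly equivalent to duplicating (upsampling) those examples while avoiding the extra memory cost.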