This study examines the performance of a fine-tuned Multilingual BERT (mBERT) model for sentiment analysis of tourist reviews on Balinese cultural attractions. A multilingual dataset comprising 7,878 user-generated reviews from Google Maps and TripAdvisor was used to capture diverse linguistic expressions and visitor perspectives. The research methodology includes: (1) problem formulation and literature review; (2) dataset collection, preprocessing, and tokenization; (3) model training using mBERT as the baseline; (4) fine-tuning for domain adaptation; and (5) comparative evaluation against other Transformer models (XLM-RoBERTa and Distil-mBERT) and classical algorithms including Logistic Regression, Support Vector Machine, and Naïve Bayes. The results demonstrate a substantial improvement after fine-tuning. The baseline mBERT achieved 85.45% accuracy, while the fine-tuned model reached 92.13% accuracy with an AUC of 0.909, confirming the effectiveness of domain-specific adaptation. Although XLM-RoBERTa obtained slightly higher performance (93.15% accuracy, AUC 0.946), the fine-tuned mBERT showed stable and competitive results, supporting its selection as the primary model of this study. Comparisons with classical methods further indicate that Transformer-based approaches provide more balanced and reliable sentiment classification. Sentiment distribution analysis reveals that tourist perceptions are predominantly positive, particularly regarding cultural authenticity and the quality of performances such as the Kecak and Fire Dance. Negative sentiments mainly relate to operational aspects, including crowd management, seating arrangements, and ticketing processes. Overall, this study provides empirical evidence that fine-tuned mBERT can effectively support data-driven evaluation of tourist experiences and deliver actionable insights for improving the service quality and sustainability of Bali’s cultural tourism.
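To make the comparative setup concrete, the sketch below illustrates one of the classical baselines named above (TF-IDF features with Logistic Regression) on a handful of illustrative toy reviews. The reviews, labels, and hyperparameters are assumptions for demonstration only; they are not drawn from the study's 7,878-review dataset, and the Transformer fine-tuning itself would use a separate mBERT training pipeline.

```python
# Hedged sketch of a classical sentiment baseline (TF-IDF + Logistic Regression),
# one of the comparison methods mentioned in the abstract. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative reviews (hypothetical, not from the actual dataset)
train_reviews = [
    "The Kecak dance was breathtaking and culturally authentic",
    "Amazing fire dance performance, a rich cultural experience",
    "Terrible crowd management and long ticket queues",
    "Seats were uncomfortable and the venue was overcrowded",
]
train_labels = ["positive", "positive", "negative", "negative"]

# Unigram + bigram TF-IDF features fed into a linear classifier
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_reviews, train_labels)

# Classify a new (hypothetical) review
print(baseline.predict(["Authentic and breathtaking performance"])[0])
```

In the study's pipeline this kind of baseline serves as a reference point; the headline comparison is against fine-tuned Transformer models, which learn contextual token representations rather than sparse n-gram weights.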