Fine-Tuning the Gemini 1.5 Flash Large Language Model for User Perception Classification in BSI Mobile Application Reviews
Fidelis, Rio; Vicraj, Vicraj; Bangun, Dea Monica; Mayanti, Nur; Indra, Evta
Jurnal Ilmiah Multidisiplin Indonesia (JIM-ID) Vol. 4 No. 05 (2025): Multidisiplin Indonesia (JIM-ID), May 2025
Publisher : Sean Institute


Abstract

The growing volume of user reviews on digital platforms such as the Google Play Store presents a major challenge for automatically understanding user perceptions, especially given the unstructured, varied, and highly subjective nature of the text data. Manual analysis at this scale is inefficient and prone to bias. To address this issue, this study applies fine-tuning to the Large Language Model (LLM) Gemini 1.5 Flash to automatically classify user perceptions of the BSI Mobile application. Perceptions are categorized into three classes: Very Poor, Fair, and Excellent. A total of 120,000 reviews were collected via web scraping and processed through cleaning, normalization, automatic labeling using the IndoBERT model, and conversion into JSONL format for fine-tuning on the Google Cloud Vertex AI platform. Evaluation results show an accuracy of 63.41% for perception classification and 67.31% for sentiment classification, with F1-scores of 28.82% and 28.75%, respectively. The model was more accurate at identifying positive perceptions, while neutral or ambiguous reviews remained a challenge. Consistency analysis between predicted perceptions and user ratings showed a match rate of 83.81%. This study demonstrates that the fine-tuned Gemini 1.5 Flash is an effective solution for text-based perception classification and holds strong potential for broader application in user opinion analytics systems.
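The JSONL conversion step mentioned in the abstract could be sketched as follows. This is a minimal illustration, not the authors' code: each labeled review becomes one chat-style training example, using the three perception classes from the paper. The exact record schema (`contents`, `role`, `parts`) follows Vertex AI's documented tuning format for Gemini models, but the prompt wording and helper names here are assumptions.

```python
import json

# The paper's three perception classes.
LABELS = {"Very Poor", "Fair", "Excellent"}

def review_to_record(review_text: str, label: str) -> dict:
    """Turn one IndoBERT-labeled review into a user/model training pair.

    The prompt text is a hypothetical placeholder; the paper does not
    specify the instruction used during fine-tuning.
    """
    if label not in LABELS:
        raise ValueError(f"unexpected label: {label}")
    return {
        "contents": [
            {"role": "user",
             "parts": [{"text": f"Classify the user perception of this review: {review_text}"}]},
            {"role": "model",
             "parts": [{"text": label}]},
        ]
    }

def write_jsonl(records, path: str) -> None:
    """Write one JSON object per line, as required for Vertex AI tuning datasets."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

The resulting `.jsonl` file would then be uploaded to Cloud Storage and passed to a Vertex AI supervised tuning job.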