Aji, Ananda Bayu
Unknown Affiliation

Published: 1 Document
Articles

Found 1 Document

Comparative Analysis of Parameter-Efficient-Fine-Tuning and Full Fine-Tuning Approaches for Indonesian Dialogue Summarization using mBART
Aji, Ananda Bayu; Purnamasari, Detty
Journal of Computer Science and Engineering (JCSE) Vol 6, No 2 (August 2025)
Publisher : ICSE (Institute of Computer Sciences and Engineering)

Abstract

This study addresses the urgent need for efficient Indonesian dialogue summarization systems in remote working contexts by adapting the multilingual mBART-large-50 model. The DialogSum dataset was translated into Indonesian using Opus-MT, and two fine-tuning approaches, full fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) with LoRA, were evaluated. Experiments on 1,500 test samples revealed that full fine-tuning achieved superior performance (ROUGE-1: 0.3726), while PEFT reduced energy consumption by 68.7% with a moderate accuracy trade-off (ROUGE-1: 0.2899). A Gradio-based interface demonstrated practical utility, enabling direct comparison of baseline, fine-tuned, and PEFT models. Critical findings include translation-induced terminology inconsistencies (e.g., "Hebes" vs. "Hebei") and context retention challenges in long dialogues. This work contributes a scalable framework for low-resource language NLP and provides actionable insights for optimizing computational efficiency in real-world applications.
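
The abstract describes attaching LoRA adapters to mBART-large-50 as the PEFT alternative to full fine-tuning. The sketch below is a minimal illustration of that kind of setup using the Hugging Face transformers and peft libraries; it is not the authors' code, and the LoRA rank, alpha, dropout, target modules, and the sample dialogue are illustrative assumptions rather than values reported in the paper.

```python
# Minimal sketch: LoRA adapters on mBART-large-50 for Indonesian summarization.
# Hyperparameters below are assumed for illustration, not taken from the paper.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
from peft import LoraConfig, get_peft_model, TaskType

model_name = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="id_ID", tgt_lang="id_ID"  # Indonesian on both sides
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# LoRA trains only small low-rank matrices injected into the attention
# projections, which is what cuts compute/energy relative to full fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],   # assumed target layers
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()   # reports the small trainable fraction

# In practice the adapters would now be trained on the translated DialogSum
# data (e.g., with Seq2SeqTrainer). Generation afterwards looks like this:
dialogue = "Orang1: Rapat dimulai pukul 9. Orang2: Baik, saya siapkan laporannya."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=512)
summary_ids = peft_model.generate(
    **inputs,
    max_length=64,
    num_beams=4,
    forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"],
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Because only the adapter weights are updated, the same base mBART checkpoint can be reused for the baseline, the LoRA variant, and the fully fine-tuned model, which is what makes the side-by-side comparison in a Gradio interface straightforward.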