Journal of Computer Science and Engineering (JCSE)
Vol 6, No 2: August (2025)

Comparative Analysis of Parameter-Efficient Fine-Tuning and Full Fine-Tuning Approaches for Indonesian Dialogue Summarization Using mBART

Aji, Ananda Bayu
Purnamasari, Detty



Article Info

Publish Date
08 Aug 2025

Abstract

This study addresses the urgent need for efficient Indonesian dialogue summarization systems in remote-working contexts by adapting the multilingual mBART-large-50 model. The DialogSum dataset was translated into Indonesian using Opus-MT, and two fine-tuning approaches, full fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) with LoRA, were evaluated. Experiments on 1,500 test samples showed that full fine-tuning achieved superior performance (ROUGE-1: 0.3726), while PEFT reduced energy consumption by 68.7% with a moderate accuracy trade-off (ROUGE-1: 0.2899). A Gradio-based interface demonstrated practical utility, enabling direct comparison of the baseline, fully fine-tuned, and PEFT models. Critical findings include translation-induced terminology inconsistencies (e.g., "Hebes" vs. "Hebei") and context-retention challenges in long dialogues. This work contributes a scalable framework for low-resource-language NLP and provides actionable insights for optimizing computational efficiency in real-world applications.
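
To make the PEFT setup concrete, the sketch below shows one common way to attach LoRA adapters to mBART-large-50 with Hugging Face's peft library. This is a minimal illustration, not the authors' exact configuration: the LoRA hyperparameters (rank, alpha, dropout, target modules) are assumptions, since the abstract does not report them.

```python
# Minimal sketch (assumed hyperparameters, not the authors' reported setup):
# attaching LoRA adapters to mBART-large-50 via Hugging Face's peft library.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
from peft import LoraConfig, TaskType, get_peft_model

model_name = "facebook/mbart-large-50"  # base model named in the abstract
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="id_ID", tgt_lang="id_ID"  # Indonesian code in mBART-50
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                                 # assumed rank; not given in the abstract
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.1,                     # assumed dropout
    target_modules=["q_proj", "v_proj"],  # attention projections in mBART layers
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

Because only the low-rank adapter matrices receive gradients, a setup like this trains a small fraction of the full parameter count, which is consistent with the 68.7% energy reduction reported for PEFT.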

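The ROUGE-1 comparison above can be reproduced in principle with a standard metric implementation; a minimal sketch follows, assuming the Hugging Face evaluate library (the abstract does not name the authors' evaluation tooling, and the strings are placeholders rather than samples from the translated DialogSum data).

```python
# Minimal ROUGE sketch using the Hugging Face `evaluate` library;
# inputs are illustrative placeholders, not actual DialogSum examples.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["ringkasan yang dihasilkan oleh model"],  # model output (placeholder)
    references=["ringkasan acuan dari anotator"],          # reference summary (placeholder)
)
print(scores["rouge1"])  # the abstract reports 0.3726 (full fine-tuning) vs 0.2899 (PEFT)
```
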
Copyright © 2025






Journal Info

Abbrev

JCSE

Publisher

Subject

Computer Science & IT

Description

Computer architecture, processor design, operating systems, high-performance computing, parallel processing, computer networks, embedded systems, theory of computation, design and analysis of algorithms, data structures and database systems, ...