The rapid growth of digital content has created significant challenges for information processing, particularly in languages such as Indonesian, where automatic summarization remains difficult. This study evaluates the performance of three T5 (Text-to-Text Transfer Transformer) variants in generating abstractive summaries of Indonesian texts, aiming to identify the most effective variant by comparing the T5-Base, FLAN-T5-Base, and mT5-Base models. Using the INDOSUM dataset of 19,000 Indonesian news article-summary pairs, we applied 5-fold cross-validation and evaluated the outputs with ROUGE metrics. T5-Base achieves the highest ROUGE-1, ROUGE-2, and ROUGE-L scores (73.52%, 64.50%, and 69.55%, respectively), followed by FLAN-T5-Base, while mT5-Base scores lowest. Qualitative analysis, however, reveals characteristic errors in each model: T5-Base exhibits redundancy and inconsistent formatting, FLAN-T5-Base suffers from truncation, and mT5-Base often generates factually incorrect summaries by misinterpreting context. We also assessed computational performance in terms of training time, inference speed, and resource consumption: mT5-Base has the shortest training time and the fastest inference, but at the cost of lower summarization accuracy, whereas T5-Base, the most accurate model, requires significantly longer training and greater computational resources. These findings highlight the trade-offs among accuracy, error tendencies, and computational efficiency, providing practical guidance for developing Indonesian summarization systems and underscoring the importance of model selection for language-specific tasks.
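The ROUGE scores reported above measure overlap between a generated summary and its reference: ROUGE-N counts shared n-grams, and ROUGE-L uses the longest common subsequence. The following is a minimal sketch in plain Python (whitespace tokenization, F1 formulation); the exact scorer configuration used in the study is not specified here, so treat this only as an illustration of the metrics.

```python
from collections import Counter

def rouge_n(reference, candidate, n=1):
    """F1 overlap of n-grams between reference and candidate (whitespace tokens)."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref, cand = ngrams(reference.split(), n), ngrams(candidate.split(), n)
    overlap = sum((ref & cand).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(reference, candidate):
    """F1 based on the longest common subsequence of the two token sequences."""
    r, c = reference.split(), candidate.split()
    # Dynamic-programming table for LCS length
    dp = [[0] * (len(c) + 1) for _ in range(len(r) + 1)]
    for i, rt in enumerate(r, 1):
        for j, ct in enumerate(c, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if rt == ct else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(r)][len(c)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

For example, an identical candidate and reference yield a score of 1.0 on all variants, while a candidate that drops one of four reference tokens scores 6/7 under this F1 formulation (precision 1.0, recall 0.75).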
Copyright © 2025