One of the main challenges in Natural Language Processing (NLP) is developing systems for automatic text summarization. These systems typically fall into two categories: extractive and abstractive. Extractive techniques build summaries by selecting important sentences or phrases directly from the original text, whereas abstractive techniques rephrase or paraphrase the content, producing summaries that more closely resemble human-written ones. This research used Transformer-based models, including BERT and T5, which have been shown to summarize texts effectively in various languages, including Indonesian. The dataset used was INDOSUM, a collection of Indonesian news articles. The best results were achieved with the abstractive T5 model, which recorded ROUGE-1, ROUGE-2, and ROUGE-L scores of 69.36%, 61.27%, and 66.17%, respectively, while the extractive BERT model achieved ROUGE-1, ROUGE-2, and ROUGE-L scores of 70.82%, 63.99%, and 58.40%.
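The ROUGE-N scores reported above measure n-gram overlap between a system summary and a reference summary. As a minimal sketch (not the evaluation toolkit used in this research), ROUGE-N F1 can be computed from clipped n-gram counts as follows; the tokenization here is a simple whitespace split, which is an assumption for illustration:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N F1: harmonic mean of n-gram precision and recall,
    with overlap counts clipped per n-gram (Lin, 2004)."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped matches
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n("the cat sat on the mat", "the cat is on the mat", n=1)` counts five overlapping unigrams out of six on each side, giving precision = recall = 5/6. ROUGE-L differs in that it is based on the longest common subsequence rather than fixed-length n-grams.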
Copyright © 2025