The growing number of scientific publications each year poses challenges for managing and processing information. To address this challenge, this study examines the application of the Bidirectional Encoder Representations from Transformers (BERT) model to extractive summarization of scientific articles. BERT, which encodes word context bidirectionally, was applied to a dataset of articles related to Natural Language Processing (NLP). Evaluation was performed using ROUGE metrics, which measure the overlap between the model-generated summary and a reference summary. The results showed that the BERT model produced relevant and accurate summaries, achieving a precision of 1.00, recall of 0.79, and F1-score of 0.88 on the first article, and a precision of 1.00, recall of 0.49, and F1-score of 0.66 on the second. While these results are promising, challenges remain in the model's ability to capture variations in writing style and to summarize complex texts. This study demonstrates that BERT is effective for generating coherent automatic summaries, with potential for further development to improve abstractive summarization quality.
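To illustrate the evaluation step, the following is a minimal sketch of computing ROUGE-1 precision, recall, and F1 between a candidate summary and a reference. It assumes the `rouge_score` package; the study does not name its ROUGE implementation, and the example texts are placeholders, not the paper's data.

```python
# Minimal sketch of ROUGE-1 evaluation, assuming the `rouge_score`
# package (pip install rouge-score). The texts below are illustrative
# placeholders, not the articles used in the study.
from rouge_score import rouge_scorer

reference = "BERT encodes word context bidirectionally and supports extractive summarization."
candidate = "BERT encodes word context bidirectionally."

# ROUGE-1 measures unigram overlap between candidate and reference.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
scores = scorer.score(reference, candidate)

r1 = scores["rouge1"]
print(f"precision={r1.precision:.2f} recall={r1.recall:.2f} f1={r1.fmeasure:.2f}")
```

Note that ROUGE's F1 is the harmonic mean of precision P and recall R, F1 = 2PR / (P + R); for example, P = 1.00 and R = 0.79 give F1 ≈ 0.88, consistent with the figures reported above. A precision of 1.00 arises when every unigram in the candidate also appears in the reference, with recall then reflecting how much of the reference the summary covers.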