Journal: International Journal of Electrical and Computer Engineering

Named entity recognition on Indonesian legal documents: a dataset and study using transformer-based models
Yulianti, Evi; Bhary, Naradhipa; Abdurrohman, Jafar; Dwitilas, Fariz Wahyuzan; Nuranti, Eka Qadri; Husin, Husna Sarirah
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 5: October 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i5.pp5489-5501

Abstract

The large volume of court decision documents in Indonesia poses a challenge for researchers seeking to help legal practitioners extract useful information from these documents. This information can also benefit the general public by improving legal transparency, law enforcement, and public understanding of how the law is implemented in Indonesia. The natural language processing task that extracts important information from a document is called named entity recognition (NER). In this study, the NER task is applied to the legal domain and is referred to as the legal entity recognition (LER) task. In this task, important legal entities, such as judges, prosecutors, and advocates, are extracted from decision documents. A new Indonesian LER dataset, called IndoLER, is built, consisting of approximately 1K decision documents annotated with 20 types of fine-grained legal entities. Transformer-based models, namely multilingual bidirectional encoder representations from transformers (M-BERT), Indonesian BERT (IndoBERT), Indonesian robustly optimized BERT pretraining approach (IndoRoBERTa), and cross-lingual language model RoBERTa (XLM-R), are then proposed to solve the Indonesian LER task using this dataset. Our experimental results show that the RoBERTa-based models, XLM-R and IndoRoBERTa, outperform state-of-the-art deep-learning baselines using bidirectional long short-term memory (BiLSTM) and BiLSTM-conditional random field (BiLSTM-CRF) approaches by 7.2% to 7.9% and 2.1% to 2.6%, respectively. XLM-R is the best-performing model, achieving an F1-score of 0.9295.
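For readers unfamiliar with how transformer models of this kind are applied to NER, the sketch below shows the general token-classification setup with XLM-RoBERTa using the Hugging Face Transformers library. The label set and example sentence here are hypothetical illustrations; the actual IndoLER annotation scheme (20 fine-grained entity types) and training procedure are defined by the paper and its dataset.

```python
# A minimal sketch of legal entity recognition as token classification,
# assuming the standard Hugging Face Transformers API. The labels below
# are an illustrative IOB2 subset, not the real IndoLER tag inventory.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical subset of legal entity labels in IOB2 format.
labels = ["O", "B-JUDGE", "I-JUDGE", "B-PROSECUTOR", "I-PROSECUTOR",
          "B-ADVOCATE", "I-ADVOCATE"]
id2label = {i: l for i, l in enumerate(labels)}
label2id = {l: i for i, l in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)

# Hypothetical Indonesian sentence from a court decision document.
text = "Hakim Ketua Budi Santoso membacakan putusan."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Map each subword token to its predicted entity tag. Before fine-tuning
# on a dataset such as IndoLER, these predictions are effectively random.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(f"{token}\t{id2label[pred.item()]}")
```

In practice the classification head is fine-tuned on the annotated documents, and span-level F1 (as reported in the abstract) is computed by comparing predicted entity spans against the gold annotations.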