Petir
Vol 14 No 2 (2021): PETIR (Jurnal Pengkajian Dan Penerapan Teknik Informatika)

Perbandingan Pre-trained Word Embedding dan Embedding Layer untuk Named-Entity Recognition Bahasa Indonesia

Meredita Susanty (Universitas Pertamina)
Sahrul Sukardi (Unknown)



Article Info

Publish Date
02 Sep 2021

Abstract

Named-Entity Recognition (NER) extracts information from text by identifying entities such as names of persons, organizations, locations, times, and other entities. Recently, machine learning approaches, particularly deep learning, have been widely used to recognize patterns of entities in sentences. Embedding, the process of converting text into numbers or vectors of numbers, translates high-dimensional vectors into a relatively low-dimensional space; embeddings make it easier to apply machine learning to large inputs such as sparse vectors representing words. The embedding process can be performed with a supervised learning method, which requires a large amount of labeled data, or with an unsupervised learning approach. This study compares the two embedding methods: a trainable embedding layer (supervised learning) and pre-trained word embeddings (unsupervised learning). The trainable embedding layer uses the embedding layer provided by the Keras library, while the pre-trained word embeddings use word2vec, GloVe, and fastText; all variants build the NER model on a BiLSTM architecture. The results show that GloVe performed better than the other embedding techniques, with a micro-average F1 score of 76.48.
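The core difference between the two approaches in the abstract, trainable versus pre-trained embeddings, can be sketched as an embedding-table lookup. This is a minimal, stdlib-only illustration with a hypothetical toy vocabulary and made-up vectors, not the authors' actual Keras/BiLSTM setup:

```python
import random

# Toy vocabulary; real NER corpora have tens of thousands of tokens.
vocab = {"<pad>": 0, "budi": 1, "tinggal": 2, "di": 3, "jakarta": 4}
EMBED_DIM = 4  # illustrative; 100-300 dimensions are common in practice

rng = random.Random(0)

def random_vector(dim):
    """Small random initialisation, as a trainable layer would start with."""
    return [rng.gauss(0.0, 0.1) for _ in range(dim)]

# Trainable embedding layer: every row starts random and is updated
# jointly with the downstream model (e.g. a BiLSTM) during training.
trainable_weights = [random_vector(EMBED_DIM) for _ in vocab]

# Pre-trained embedding (word2vec/GloVe/fastText style): vectors come
# from an unsupervised model trained on a large corpus and are typically
# frozen. The values below are hypothetical.
pretrained = {
    "budi":    [0.2, -0.1, 0.5, 0.3],
    "jakarta": [0.4,  0.7, -0.2, 0.1],
}
pretrained_weights = [[0.0] * EMBED_DIM for _ in vocab]
for word, idx in vocab.items():
    if word in pretrained:            # known word: copy its vector
        pretrained_weights[idx] = pretrained[word]
    elif word != "<pad>":             # out-of-vocabulary: random fallback
        pretrained_weights[idx] = random_vector(EMBED_DIM)

def embed(token_ids, weights):
    """Embedding lookup: map token ids to dense vectors."""
    return [weights[i] for i in token_ids]

sentence = [vocab[w] for w in ["budi", "tinggal", "di", "jakarta"]]
vectors = embed(sentence, pretrained_weights)
print(len(vectors), len(vectors[0]))  # 4 4: sequence length, embedding dim
```

In either case the BiLSTM receives the same shape of input, a sequence of dense vectors; the approaches differ only in where the table's values come from and whether they keep training.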

Copyrights © 2021






Journal Info

Abbrev

petir

Publisher

Subject

Chemical Engineering, Chemistry & Bioengineering; Computer Science & IT; Control & Systems Engineering; Electrical & Electronics Engineering

Description

Journal PETIR is a scientific journal published since 2007 by the STT-PLN Department of Informatics Engineering as a medium for disseminating research results, library studies, observations, and surveys, and for supporting the development of informatics engineering and related sciences ...