Articles

Found 2 Documents
Journal : JOIV : International Journal on Informatics Visualization

Few-Shot-BERT-RNN Narrative Structure Analysis for Andersen's Stories
Daniati, Erna; Wibawa, Aji Prasetya; Irianto, Wahyu Sakti Gunawan; Hernandez, Leonel
JOIV : International Journal on Informatics Visualization Vol 9, No 4 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.4.3932

Abstract

Event Extraction (EE) is a pivotal task in NLP, in which important events in narrative text must be detected and recognized. In this paper we present an alternative method for extracting events from Hans Christian Andersen's fairy tales, combining Few-Shot Learning with BERT (Bidirectional Encoder Representations from Transformers) and an RNN (Recurrent Neural Network). We selected Andersen's fairy tales because they are characterized by rich narratives and symbolic language, which often complicate automatic event extraction. To reduce reliance on labeled samples, we use Few-Shot Learning, which enables the model to learn from only a small number of labeled event examples. The BERT model generates deep contextual representations by modeling relations between words and sentences, while the RNN captures the sequence of events in the story, which determines the structure of the narrative. The findings demonstrate that the proposed framework significantly improves event extraction, achieving high scores on accuracy, precision, recall, and F1-score. The method is also effective at extracting non-explicit events while preserving the narrative context. Despite the challenges posed by metaphorical language and subjective events, this work shows that Few-Shot Learning, BERT, and RNNs offer a promising solution to event extraction from complex narratives.
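As a rough illustration of the architecture outlined in the abstract, below is a minimal sketch of a BERT encoder followed by an RNN tagging head. It assumes the HuggingFace bert-base-uncased checkpoint, a single bidirectional GRU layer, and BIO-style event-trigger labels; the paper's exact few-shot training setup and label scheme are not specified here.

# Minimal sketch of the BERT + RNN event-tagging idea (illustrative assumptions,
# not the paper's exact architecture or few-shot procedure).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertRnnEventTagger(nn.Module):
    def __init__(self, num_labels: int, rnn_hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        # RNN layer over BERT token embeddings to model the order of events.
        self.rnn = nn.GRU(self.bert.config.hidden_size, rnn_hidden,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * rnn_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token representations from BERT.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        rnn_out, _ = self.rnn(hidden)
        return self.classifier(rnn_out)  # per-token event logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertRnnEventTagger(num_labels=3)  # e.g. O / B-EVENT / I-EVENT (assumed labels)
batch = tokenizer(["The little mermaid rose to the surface."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, seq_len, 3)

In a few-shot setting, a head like this would be fine-tuned on only a handful of labeled story passages per event type; that episodic setup is assumed here rather than taken from the paper.
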
Neural Machine Translation of Spanish-English Food Recipes Using LSTM
Dedes, Khen; Putra Utama, Agung Bella; Wibawa, Aji Prasetya; Afandi, Arif Nur; Handayani, Anik Nur; Hernandez, Leonel
JOIV : International Journal on Informatics Visualization Vol 6, No 2 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.2.804

Abstract

Nowadays, food has become globalized, and people from different parts of the world can cook dishes from other countries through online recipes. Motivated by this, this study developed a translation model using neural machine translation (NMT). NMT is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, NMT aims to build a single neural network that can be jointly tuned to maximize translation performance, and recently proposed NMT models often belong to the encoder-decoder family. Our experiments led to novel insights and practical advice for building and extending NMT, applying the long short-term memory (LSTM) method to 47 bilingual food recipes in the Spanish-English and English-Spanish directions. LSTM is well suited to translation because it can retain information over long spans, capture complicated relationships in the data, and provide useful signals for deciding translation outcomes. The translations are evaluated with BLEU. The comparison shows that translating recipes from Spanish to English achieves a better BLEU score of 0.998426 than English to Spanish, with a 70%:30% data split at epoch 1000. Researchers can convert a country's popular cuisine recipes into another language for further study, allowing those recipes to become more widely recognized abroad.
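As a rough illustration of the setup described in the abstract, below is a minimal sketch of an LSTM encoder-decoder for sentence translation together with BLEU scoring via NLTK. The vocabulary sizes, dimensions, and example sentences are illustrative assumptions, not the paper's exact configuration or data.

# Minimal sketch of an LSTM encoder-decoder plus BLEU evaluation
# (illustrative assumptions, not the paper's exact model or data).
import torch
import torch.nn as nn
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

class Seq2SeqLSTM(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source recipe sentence into the LSTM's final states.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode conditioned on the encoder state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # per-step target-vocabulary logits

model = Seq2SeqLSTM(src_vocab=5000, tgt_vocab=5000)
src = torch.randint(0, 5000, (1, 12))   # one tokenized Spanish sentence (dummy ids)
tgt = torch.randint(0, 5000, (1, 10))   # shifted English target for teacher forcing
print(model(src, tgt).shape)            # (1, 10, 5000)

# BLEU score of a candidate translation against a reference (toy example).
reference = [["add", "the", "olive", "oil", "to", "the", "pan"]]
candidate = ["add", "olive", "oil", "to", "the", "pan"]
bleu = sentence_bleu(reference, candidate,
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.4f}")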