Makwana, Hemant
Unknown Affiliation

Published: 2 Documents

Articles

Found 2 Documents

Extracting geo-references from social media text using bi-long short term memory networks
Mangal, Dharmendra; Makwana, Hemant
Indonesian Journal of Electrical Engineering and Computer Science Vol 35, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v35.i2.pp1263-1270

Abstract

With millions of users, social media is a rich source of information about global and local events, and brief messages in particular are practical and highly popular. Many recent studies estimate the location of events by tracking posts in social media text. Extracting location data and estimating an event's location while maintaining a sufficient level of situation awareness can be difficult, particularly in disaster situations such as fires or traffic accidents. In this work we propose an approach to identifying geo-references in text messages using bi-directional long short-term memory (Bi-LSTM) neural networks. The results show that applying Bi-LSTM to the dataset gives high accuracy after fine-tuning (up to 10 epochs): testing achieves an accuracy of 0.98 and a loss of 0.076. This indicates that the proposed methodology improves on previous conditional random field (CRF)-based approaches.
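The abstract does not report the exact network configuration, so the following is only a minimal sketch of a Bi-LSTM sequence tagger for geo-reference extraction, assuming tokens are integer-encoded and labelled with a BIO scheme (B-LOC / I-LOC / O); vocabulary size, embedding dimension, and sequence length are illustrative placeholders.

```python
# Minimal Bi-LSTM sequence-tagging sketch (assumed setup, not the paper's exact model).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 50          # assumed maximum message length in tokens
NUM_TAGS = 3          # B-LOC, I-LOC, O

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data shaped like a padded, integer-encoded dataset.
x = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, NUM_TAGS, size=(32, MAX_LEN))
model.fit(x, y, epochs=10, verbose=0)  # the abstract reports fine-tuning up to 10 epochs
```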
Performance analysis of different BERT implementation for event burst detection from social media text
Mangal, Dharmendra; Makwana, Hemant
Indonesian Journal of Electrical Engineering and Computer Science Vol 38, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v38.i1.pp439-446

Abstract

Language models play a very important role in natural language processing (NLP) tasks. To understand natural language, learning models must be trained on large corpora, which requires a lot of time and computing resources. Detecting information such as events and locations in text is an important NLP task, and because event detection must be done in real time so that immediate actions can be taken, efficient decision-making models are needed. Pre-trained models such as bidirectional encoder representations from transformers (BERT) are gaining popularity for solving NLP problems: since BERT-based models are pre-trained on a large language corpus, they require very little time to adapt to a domain-specific NLP task. Different implementations of BERT have been proposed to enhance the efficiency and applicability of the base model, and selecting the right implementation is essential to the overall performance of an NLP-based system. This work presents comparative insights into five widely used BERT implementations, namely BERT-base, BERT-large, DistilBERT, robustly optimized BERT approach (RoBERTa-base), and RoBERTa-large, for event detection from text extracted from social media streams. The results show that the DistilBERT model outperforms the others on performance metrics such as precision, recall, and F1-score, while also being the fastest to train.
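The abstract does not specify the fine-tuning setup, so the sketch below only illustrates, under assumptions, how the five public checkpoints could be loaded and compared on an event-detection (binary classification) task with Hugging Face Transformers; the example texts, labels, and label count are placeholders rather than the paper's data.

```python
# Hedged sketch: loading several BERT variants for event classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = [
    "bert-base-uncased",
    "bert-large-uncased",
    "distilbert-base-uncased",
    "roberta-base",
    "roberta-large",
]

texts = ["Huge fire reported near the central station", "Lovely weather today"]
labels = torch.tensor([1, 0])  # 1 = event, 0 = no event (illustrative labels)

for name in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch, labels=labels)
    preds = outputs.logits.argmax(dim=-1)
    # In the study itself, precision, recall, and F1 would be computed on a
    # held-out test set after fine-tuning each checkpoint.
    print(name, "loss:", float(outputs.loss), "preds:", preds.tolist())
```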