Vijay Rajpurohit
Visvesvaraya Technological University

Published: 2 Documents

Articles
Transformer based multi-head attention network for aspect-based sentiment classification Abhinandan Shirahatti; Vijay Rajpurohit; Sanjeev Sannakki
Indonesian Journal of Electrical Engineering and Computer Science Vol 26, No 1: April 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v26.i1.pp472-481

Abstract

Aspect-based sentiment classification (ABSC) is vital in helping manufacturers identify the pros and cons of their products and features. In recent years there has been a tremendous surge of interest in ABSC, since it predicts the sentiment polarity of an aspect term in a sentence rather than of the whole sentence. Most existing methods use recurrent neural networks and attention mechanisms, which fail to capture global dependencies of the input sequence and therefore lose some information; other existing methods use sequence models, but training these models is tedious. Here we propose the multi-head attention transformation (MHAT) network, which uses a transformer encoder to minimize training time for ABSC tasks. First, we use pre-trained global vectors for word representation (GloVe) for word and aspect-term embeddings. Second, part-of-speech (POS) features are fused with MHAT to capture grammatical aspects of the input sentence, which most existing methods neglect. On the SemEval 2014 dataset, the proposed model consistently outperforms state-of-the-art methods on aspect-based sentiment classification tasks.
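The pipeline described in the abstract (word embeddings fused with POS features, then multi-head self-attention) can be sketched minimally as below. This is an illustrative sketch only: the dimensions, the random "GloVe-like" vectors, the toy one-hot POS tags, and the identity Q/K/V projections are all assumptions for brevity, not the paper's actual MHAT configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads):
    """Scaled dot-product self-attention over num_heads slices of x.

    x: (seq_len, d_model). For brevity, Q = K = V = the head's slice of x
    (a real transformer would apply learned linear projections).
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * d_head:(h + 1) * d_head]
        weights = softmax(q @ k.T / np.sqrt(d_head))  # (seq_len, seq_len)
        heads.append(weights @ v)
    return np.concatenate(heads, axis=-1)  # back to (seq_len, d_model)

# Toy inputs: hypothetical 8-dim "GloVe-like" word vectors for a 5-token
# sentence, fused with a 4-dim one-hot POS feature by concatenation.
rng = np.random.default_rng(0)
word_emb = rng.normal(size=(5, 8))
pos_feat = np.eye(4)[rng.integers(0, 4, size=5)]
fused = np.concatenate([word_emb, pos_feat], axis=-1)  # (5, 12)
out = multi_head_attention(fused, num_heads=4)          # d_head = 3
print(out.shape)
```

Concatenating POS one-hots into the token representation is one simple way to realize the "POS features fused with the attention network" idea; the paper's exact fusion mechanism may differ.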
Fine grained irony classification through transfer learning approach Abhinandan Shirahatti; Vijay Rajpurohit; Sanjeev Sannakki
Computer Science and Information Technologies Vol 4, No 1: March 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v4i1.p43-49

Abstract

Nowadays irony is pervasive in social media discussion forums and chats, posing further obstacles to sentiment analysis efforts. The aim of the present research work is to detect irony and its types in English tweets. We employ a new system for irony detection in English tweets: a distilled bidirectional encoder representations from transformers (DistilBERT) light transformer model, based on the bidirectional encoder representations from transformers (BERT) architecture, further strengthened by a bidirectional long short-term memory (Bi-LSTM) network; this configuration minimizes data-preprocessing tasks. The proposed model was tested on SemEval-2018 Task 3, for which 3,834 samples were provided. Experimental results show that the proposed system achieves a precision of 81% for the not-irony class and 66% for the irony class, recall of 77% for not-irony and 72% for irony, and an F1 score of 79% for not-irony and 69% for irony. Since previous researchers have proposed binary classification models, in this study we extend the work to multiclass classification of irony. This is significant and will serve as a foundation for future research on different types of irony in tweets.
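The per-class F1 scores quoted in the abstract follow directly from the reported precision and recall, since F1 is their harmonic mean; a quick check:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported in the abstract (SemEval-2018 Task 3, binary setting)
print(round(f1(0.81, 0.77), 2))  # not-irony: 0.79
print(round(f1(0.66, 0.72), 2))  # irony: 0.69
```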