Articles

Found 2 Documents

Stock Prediction Based on Twitter Sentiment Extraction Using BiLSTM-Attention
Dhomas Hatta Fudholi; Royan Abida N. Nayoan; Septia Rani
Indonesian Journal of Electrical Engineering and Informatics (IJEEI), Vol. 10, No. 1, March 2022
Publisher: IAES Indonesian Section

DOI: 10.52549/ijeei.v10i1.3011

Abstract

An accurate stock price prediction model can yield large returns. According to behavioural economics, other people's emotions and viewpoints have a significant impact on markets, including the rise and fall of stock prices. Previous studies have shown that public sentiment retrieved from online information can be very valuable for market trading. In this paper, we propose a model that predicts future stock prices using public sentiment from social media. The online information used in this research consists of financial tweets collected from Twitter and stock price values retrieved from Yahoo! Finance. We collected tweets related to Netflix stock, together with the stock prices for the same five-year period (2015 to 2020), as the dataset. We extracted sentiment values using the VADER algorithm. We apply a Bidirectional Long Short-Term Memory (BiLSTM) architecture and investigate the effect of adding an attention layer. We created seven experiments with different combinations of stock price parameters and sentiment values. We experimented with two sentiment values: the tweet's compound score, and the compound score multiplied by the tweet's favorites count, which we consider one representation of public sentiment. Among the seven experiments, the BiLSTM-attention model combined with our selected stock price parameters (close price and open price) and Twitter sentiment values multiplied by the tweet's favorites count yields the best RMSE: 2.482e-02 on the training set and 2.981e-02 on the test set.
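To make the abstract's feature setup concrete, the following is a minimal sketch (not the authors' code) of how daily close/open prices and a favorites-weighted sentiment score could be arranged into fixed-length windows for a BiLSTM input. The window length, column order, and toy values are assumptions for illustration only.

```python
import numpy as np

def weighted_sentiment(compound, favorites):
    """One of the paper's two sentiment variants: the VADER compound
    score multiplied by the tweet's favorites count (assumed form)."""
    return compound * favorites

def make_windows(features, window=5, horizon=1):
    """Slide a fixed-length window over daily features to build a
    (samples, timesteps, features) array for a BiLSTM, with the
    next day's close price (column 0) as the target."""
    X, y = [], []
    for i in range(len(features) - window - horizon + 1):
        X.append(features[i:i + window])
        y.append(features[i + window + horizon - 1][0])
    return np.array(X), np.array(y)

# Toy daily rows: [close, open, favorites-weighted sentiment]
days = np.array([
    [300.0, 298.0, weighted_sentiment(0.6, 120)],
    [305.0, 301.0, weighted_sentiment(-0.2, 40)],
    [303.0, 304.0, weighted_sentiment(0.1, 15)],
    [310.0, 305.0, weighted_sentiment(0.8, 300)],
    [312.0, 311.0, weighted_sentiment(0.4, 80)],
    [311.0, 313.0, weighted_sentiment(-0.5, 60)],
    [315.0, 312.0, weighted_sentiment(0.7, 200)],
])

X, y = make_windows(days, window=5)
print(X.shape, y.shape)  # (2, 5, 3) (2,)
```

The resulting three-feature windows match the best-performing setup the abstract describes (close price, open price, and favorites-weighted sentiment); in the actual model these would feed a BiLSTM layer followed by attention.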
A Study on Visual Understanding Image Captioning using Different Word Embeddings and CNN-Based Feature Extractions
Dhomas Hatta Fudholi; Annisa Zahra; Royan Abida N. Nayoan
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, Vol. 7, No. 1, February 2022
Publisher: Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v7i1.1394

Abstract

Image captioning is a task that produces a natural-language description of an image. It can be used in a variety of applications, such as image indexing and virtual assistants. In this research, we compared the performance of three word embeddings (GloVe, Word2Vec, and FastText) and six CNN-based feature extraction architectures (Inception V3, InceptionResNet V2, ResNet152 V2, EfficientNet B3 V1, EfficientNet B7 V1, and NASNetLarge), each combined with an LSTM decoder, for image captioning. We used images of ten household objects (bed, cell phone, chair, couch, oven, potted plant, refrigerator, sink, table, and tv) from the MSCOCO dataset to develop the model. We then created five new captions in Bahasa Indonesia for each selected image. The captions may describe the name, location, color, size, and characteristics of an object and its surroundings. Across our 18 experimental models, we trained each combination of word embedding and CNN-based feature extraction architecture with an LSTM decoder. The models that combined Word2Vec with NASNetLarge generated better Indonesian captions than the other models according to the BLEU-4 metric.
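The architecture the abstract describes pairs a CNN image encoder with word embeddings and an LSTM decoder. The sketch below (not the authors' code) illustrates the decoding side only: greedy next-word selection from an image feature vector and the previous word's embedding. The vocabulary, dimensions, and the toy scoring function standing in for the trained LSTM are all assumptions for illustration.

```python
import numpy as np

# Toy Indonesian vocabulary with start/end tokens (assumed).
vocab = ["<start>", "sebuah", "kursi", "merah", "<end>"]
word2id = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
image_feat = rng.normal(size=16)                 # stand-in for CNN (e.g. NASNetLarge) features
embeddings = rng.normal(size=(len(vocab), 16))   # stand-in for Word2Vec vectors

def next_word_logits(image_feat, prev_word_id):
    """Toy stand-in for the trained LSTM decoder: score each vocab
    word by alignment with image features plus the previous word."""
    context = image_feat + embeddings[prev_word_id]
    return embeddings @ context

def greedy_caption(image_feat, max_len=5):
    """Greedy decoding: repeatedly pick the highest-scoring next word
    until <end> is produced or the length limit is reached."""
    caption, prev = [], word2id["<start>"]
    for _ in range(max_len):
        nxt = int(np.argmax(next_word_logits(image_feat, prev)))
        if vocab[nxt] == "<end>":
            break
        caption.append(vocab[nxt])
        prev = nxt
    return " ".join(caption)

print(greedy_caption(image_feat))
```

In the paper's 18 experiments, the encoder, embeddings, and decoder above would be swapped among the six CNN architectures and three word embeddings; this sketch only fixes the shape of the decode loop.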