Developing Semantic Textual Similarity for Guragigna Language Using Deep Learning Approach
Degemu, Getnet
International Journal of Advanced Science and Computer Applications, Vol. 5 No. 1 (2026): March 2026
Publisher: Utan Kayu Publishing

DOI: 10.47679/ijasca.v4i2.106

Abstract

Semantic textual similarity (STS) is one of the higher-level tasks in natural language processing (NLP), with significant benefits for applications such as information retrieval, information extraction, text summarization, data mining, and machine translation. This research presents a deep learning approach for capturing semantic textual similarity in the Guragigna language. The methodology involves collecting a Guragigna-language corpus and preprocessing the text data; text representation is done using the Universal Sentence Encoder (USE) along with the word embedding techniques Word2Vec and GloVe, and mean squared error (MSE) is used to measure performance. In the experimentation phase, LSTM, Bidirectional RNN, GRU, and Stacked RNN models are trained and evaluated with the different embedding techniques. The results demonstrate the efficacy of the developed models in capturing semantic textual similarity in the Guragigna language. Across the embedding techniques (Word2Vec, GloVe, and USE), the Bidirectional RNN model with USE embeddings achieves the lowest MSE of 0.0950 and the highest accuracy of 0.9244; GloVe and Word2Vec embeddings also show competitive performance, with slightly higher MSE and lower accuracy. The Universal Sentence Encoder consistently emerges as the top-performing embedding across all RNN architectures. Overall, the results demonstrate the effectiveness of the LSTM, GRU, Bidirectional RNN, and Stacked RNN models in measuring semantic textual similarity in the Guragigna language.
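
To make the described setup concrete, the sketch below shows one plausible way such a sentence-pair STS model could be assembled: a shared bidirectional recurrent encoder over pre-embedded sentences, followed by a regression head trained with MSE. This is a minimal illustration under assumed layer sizes, sequence length, and embedding dimension, not the authors' actual implementation.

```python
# Hypothetical sketch of a sentence-pair STS model in the spirit of the abstract:
# two pre-embedded Guragigna sentences pass through a shared bidirectional RNN,
# and a small regression head predicts a similarity score trained with MSE.
# MAX_LEN, EMB_DIM, and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN = 30    # assumed maximum sentence length (tokens)
EMB_DIM = 300   # assumed embedding size (e.g. Word2Vec/GloVe vectors)

def build_sts_model():
    # Each input is a sentence already mapped to pretrained embeddings.
    sent_a = layers.Input(shape=(MAX_LEN, EMB_DIM), name="sentence_a")
    sent_b = layers.Input(shape=(MAX_LEN, EMB_DIM), name="sentence_b")

    # Shared bidirectional GRU encoder: the same weights encode both sentences.
    encoder = layers.Bidirectional(layers.GRU(128), name="shared_encoder")
    vec_a = encoder(sent_a)
    vec_b = encoder(sent_b)

    # Combine the two sentence vectors and regress a similarity score in [0, 1].
    merged = layers.Concatenate()([vec_a, vec_b, layers.Subtract()([vec_a, vec_b])])
    hidden = layers.Dense(64, activation="relu")(merged)
    score = layers.Dense(1, activation="sigmoid", name="similarity")(hidden)

    model = Model(inputs=[sent_a, sent_b], outputs=score)
    # MSE matches the evaluation metric reported in the abstract.
    model.compile(optimizer="adam", loss="mse", metrics=["mse"])
    return model

model = build_sts_model()
model.summary()
```

Swapping the GRU for an LSTM layer, stacking recurrent layers, or replacing the per-token embedding inputs with fixed USE sentence vectors would yield the other model/embedding combinations the abstract compares.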