Salsabila, Imtitsal Ulya
Unknown Affiliation

Published: 1 Document
Articles


An Evaluation of Self-Attentive Sequential Recommendation (SASRec) Algorithm Using Hyperparameter Tuning Wibowo, Agung Toto; Hasmawati, Hasmawati; Nurrahmi, Hani; Salsabila, Imtitsal Ulya
Jurnal Teknik Informatika (Jutif) Vol. 7 No. 2 (2026): JUTIF Volume 7, Number 2, April 2026
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2026.7.2.5158

Abstract

Sequential recommendation (SR) is a branch of recommender systems that aims to predict the next item a user will interact with based on their historical sequence of interactions. The main challenge in SR is to capture both short-term and long-term dependencies among items within a sequence. Self-Attentive Sequential Recommendation (SASRec) is a self-attention-based deep learning model designed to recognize sequential interaction patterns. Despite its effectiveness, SASRec's performance depends heavily on its hyperparameter configuration, yet comprehensive evaluations of this dependence remain limited. This research evaluates the influence of SASRec's configuration on sequential recommendation performance through hyperparameter tuning. The hyperparameters studied are the hidden dimension (hidden_size), the feed-forward inner dimension (inner_size), the number of attention heads (num_heads), and the number of layers (num_layers). The evaluation was conducted on two public datasets with different sparsity characteristics: MovieLens-1M (sparsity ≈ 95.80%) and Amazon Musical Instruments (sparsity ≈ 99.99%). Recall@k and MRR@k were used as performance metrics. The results show that hidden_size and inner_size had a significant positive impact on performance, especially on the denser dataset: the optimal hidden_size was 64 on Amazon Musical Instruments and 256 on MovieLens-1M, while the optimal inner_size was 256 on both datasets. By contrast, the num_heads and num_layers hyperparameters did not yield significant performance improvements. Furthermore, in a comparison between SASRec, GRU4Rec, and BERT4Rec, SASRec outperformed GRU4Rec and BERT4Rec on the highly sparse Amazon Musical Instruments dataset, achieving an average Recall@20 of 0.0678 and an average MRR@20 of 0.0223.
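The Recall@k and MRR@k metrics named in the abstract can be sketched as follows. This is a minimal illustration, assuming the common leave-one-out protocol (one held-out ground-truth item per user, scored against a ranked list of candidate items); the paper's exact evaluation setup may differ, and the function names here are illustrative, not taken from the paper.

```python
def recall_at_k(ranked_items, target, k):
    """1.0 if the held-out target item appears in the top-k list, else 0.0."""
    return 1.0 if target in ranked_items[:k] else 0.0

def mrr_at_k(ranked_items, target, k):
    """Reciprocal rank of the target within the top-k list, else 0.0."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / rank
    return 0.0

def evaluate(predictions, k=20):
    """Average Recall@k and MRR@k over users.

    predictions: list of (ranked_items, target) pairs, one per user.
    """
    n = len(predictions)
    avg_recall = sum(recall_at_k(r, t, k) for r, t in predictions) / n
    avg_mrr = sum(mrr_at_k(r, t, k) for r, t in predictions) / n
    return avg_recall, avg_mrr
```

With per-user ranked lists produced by any sequential model (SASRec, GRU4Rec, or BERT4Rec), averaging these two quantities at k = 20 yields figures directly comparable to the Recall@20 and MRR@20 values reported above.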