Anuradha, Surabhi
Unknown Affiliation

Published: 2 Documents
Articles


RecommendRift: a leap forward in user experience with transfer learning on Netflix recommendations
Anuradha, Surabhi; Jyothi, Pothabathula Naga; Sivakumar, Surabhi; Sheshikala, Martha
Indonesian Journal of Electrical Engineering and Computer Science Vol 36, No 2: November 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v36.i2.pp1218-1225

Abstract

In today’s fast-paced lifestyle, streaming movies and series on platforms like Netflix is a valued recreational activity. However, users often spend considerable time searching for the right content and receive irrelevant recommendations, particularly when facing the “cold start problem” for new users. This challenge arises because existing recommender systems rely on factors like casting, title, and genre, using term frequency-inverse document frequency (TF-IDF) for vectorization, which prioritizes word frequency over semantic meaning. To address this, this study proposes a recommender system that considers not only casting, title, and genre but also the short description of movies or shows. Leveraging Word2Vec embeddings for semantic relationships, the system offers recommendations that align better with user preferences. Evaluation metrics including precision, mean average precision (MAP), discounted cumulative gain (DCG), and ideal discounted cumulative gain (IDCG) demonstrate the system’s effectiveness, achieving a normalized DCG (NDCG)@10 of 0.956. A/B testing shows an improved click-through rate (CTR) for recommendations, indicating an enhanced streaming experience.
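The NDCG@10 figure reported above can be illustrated with a minimal sketch of the metric itself: DCG discounts each ranked item's relevance by the log of its position, and NDCG normalizes by the DCG of the ideal (relevance-sorted) ordering. The relevance values below are hypothetical, not data from the paper.

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top-k ranked items:
    # relevance at rank i (0-based) is divided by log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal ordering (relevances sorted descending).
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance judgments for one user's top-10 recommendations
ranked_rels = [3, 2, 3, 0, 1, 2, 0, 0, 1, 0]
score = ndcg_at_k(ranked_rels, 10)
```

An already-ideal ranking scores exactly 1.0, so values near 1 (such as the paper's 0.956) indicate that relevant items are concentrated at the top of the list.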
Investigating the recall efficiency in abstractive summarization: an experimental based comparative study
Anuradha, Surabhi; Sheshikala, Martha
Indonesian Journal of Electrical Engineering and Computer Science Vol 39, No 1: July 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v39.i1.pp446-454

Abstract

This study explores text summarization, a critical component of natural language processing (NLP), specifically targeting scientific documents. Traditional extractive summarization, which relies on the original wording, often results in disjointed sequences of sentences and fails to convey key ideas concisely. To address these issues and ensure comprehensive inclusion of relevant details, our research aims to improve the coherence and completeness of summaries. We employed 25 different large language models (LLMs) to evaluate their performance in generating abstractive summaries of scholarly scientific documents. A recall-oriented evaluation of the generated summaries revealed that LLMs such as 'Claude v2.1', 'PPLX 70B Online', and 'Mistral 7B Instruct' demonstrated exceptional performance with ROUGE-1 scores of 0.92, 0.88, and 0.85, respectively, supported by high precision and recall values from bidirectional encoder representations from transformers (BERT) scores (0.902, 0.894, and 0.888). These findings offer valuable insights for NLP researchers, laying the foundation for future advancements in LLMs for summarization. The study highlights potential improvements in text summarization techniques, benefiting various NLP applications.
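The recall-oriented ROUGE-1 evaluation described above can be sketched as unigram recall: the fraction of reference-summary tokens that also appear in the generated summary, with per-token counts clipped. This is an illustrative simplification (the paper's evaluation presumably used a full ROUGE toolkit with tokenization and stemming); the example sentences are hypothetical.

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    # Count unigrams in the reference and the candidate summary.
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each reference token is credited at most as many
    # times as it occurs in the candidate.
    overlap = sum(min(count, cand_counts[tok]) for tok, count in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

# Hypothetical reference and model-generated summary
reference = "the model summarizes scientific documents"
candidate = "the model summarizes documents well"
recall = rouge1_recall(reference, candidate)
```

Here four of the five reference tokens are recovered, giving a recall of 0.8; a recall-oriented score like this rewards summaries that cover the reference content, which is why the study pairs it with BERTScore precision to guard against verbose summaries.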