Vora, Deepali
Unknown Affiliation

Published: 2 Documents
Articles

Towards efficient knowledge extraction: Natural language processing-based summarization of research paper introductions
Chaudhari, Nikita; Vora, Deepali; Kadam, Payal; Khairnar, Vaishali; Patil, Shruti; Kotecha, Ketan
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 1: February 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i1.pp680-691

Abstract

Academic and research papers serve as valuable platforms for disseminating expertise and discoveries to diverse audiences. The growing volume of academic papers, with nearly 7 million new publications annually, presents a formidable challenge for students and researchers alike. Consequently, the development of research paper summarization tools has become essential for distilling key insights efficiently. This study examines the effectiveness of pre-trained models such as the text-to-text transfer transformer (T5), bidirectional encoder representations from transformers (BERT), bidirectional and auto-regressive transformer (BART), and pre-training with extracted gap-sentences for abstractive summarization (PEGASUS) on research papers, introducing a novel hybrid model that merges extractive and abstractive techniques. The quality and accuracy of the generated summaries are assessed through comparative analysis, recall-oriented understudy for gisting evaluation (ROUGE) and bilingual evaluation understudy (BLEU) scores, and author evaluation. This advancement contributes to enhancing the accessibility and efficiency of assimilating complex academic content, emphasizing the importance of advanced summarization tools in promoting access to academic knowledge.
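The ROUGE evaluation the abstract mentions can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, whitespace tokenization, and restriction to unigram (ROUGE-1) overlap are simplifying assumptions; real evaluations would use an established package such as rouge-score.

```python
from collections import Counter


def rouge1_scores(candidate: str, reference: str) -> dict:
    """Compute unigram (ROUGE-1) precision, recall, and F1.

    Simplified illustration: tokens are lowercase whitespace splits,
    and overlap is the clipped unigram count between the two texts.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


# Hypothetical generated vs. reference summary pair.
scores = rouge1_scores(
    "the model summarizes research papers",
    "the model summarizes papers",
)
```

Higher recall indicates the generated summary covers more of the reference's content, while precision penalizes padding the summary with extra words; the hybrid extractive-abstractive comparison in the study rests on exactly this kind of trade-off.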
EmoVibe: AI-driven multimodal emotion analysis for mental health via social media dashboards
Vora, Deepali; Sharma, Aryan; Garg, Mudit; Fransis, Steve
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i6.pp4565-4578

Abstract

Monitoring mental health via social media often relies on unimodal approaches, such as sentiment analysis on text or single-stage image categorization, or performs early feature fusion. However, in real-world contexts where emotions are conveyed via text, emojis, and images, a unimodal approach leads to obscured decision-making pathways and diminished overall performance. To overcome these limitations, we propose EmoVibe, a hybrid multimodal AI framework for emotion analysis. EmoVibe uses an attention-based late fusion strategy, in which text embeddings are generated by bidirectional encoder representations from transformers (BERT) and visual features are extracted by a vision transformer, while emoticon vectors linked to avatars are processed independently. These independent feature streams are then integrated at higher levels, enhancing interpretability and performance. In contrast to early fusion methods, integrated multimodal large language models (LLMs) such as CLIP, Flamingo, GPT-4V, and MentaLLaMA, and domain-adapted models such as EmoBERTa, EmoVibe preserves modality-specific context without premature fusion. This architecture reduces processing cost and allows for clearer, unambiguous rationalization and explanations. EmoVibe outperforms unimodal baselines and early fusion models, obtaining 89.7% accuracy on GoEmotions, FER, and AffectNet, compared to BERT's 87.4% and ResNet-50's 84.2%. Furthermore, a customizable, real-time, privacy-aware dashboard is created to support physicians and end users. This technology enables scalable and proactive intervention options and fosters user self-awareness of mental health.
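The late-fusion idea described in the abstract can be sketched in miniature: each modality (text, image, emoji) yields its own feature vector, and attention weights decide how much each contributes to the fused representation. This is an illustrative assumption-laden sketch, not EmoVibe's architecture; the per-modality "score" here is just the mean activation, standing in for a learned attention scorer, and the toy two-dimensional vectors are invented.

```python
import math


def attention_late_fusion(modality_feats: dict) -> list:
    """Fuse per-modality feature vectors via softmax attention weights.

    Each modality maps to a feature vector of the same dimension.
    A scalar relevance score per modality (here, the mean activation,
    a placeholder for a trained scorer) is softmax-normalized into
    attention weights, and the fused vector is the weighted sum.
    """
    names = list(modality_feats)
    # Scalar relevance score per modality.
    scores = [sum(modality_feats[n]) / len(modality_feats[n]) for n in names]
    # Softmax over modality scores -> attention weights summing to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of modality vectors (all assumed the same dimension).
    dim = len(modality_feats[names[0]])
    return [
        sum(w * modality_feats[n][i] for w, n in zip(weights, names))
        for i in range(dim)
    ]


# Hypothetical pre-extracted features for one social media post.
fused = attention_late_fusion({
    "text": [0.9, 0.1],   # e.g. BERT-style text embedding (toy values)
    "image": [0.2, 0.8],  # e.g. vision-transformer features (toy values)
    "emoji": [0.5, 0.5],  # e.g. emoticon/avatar vector (toy values)
})
```

Because fusion happens after each encoder has finished, the per-modality weights remain inspectable, which is the interpretability benefit the abstract attributes to late fusion over early feature concatenation.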