Clinicians, researchers, and other healthcare professionals face the challenge of efficiently extracting relevant knowledge from vast amounts of textual data. Medical text summarization addresses this challenge by condensing lengthy medical documents into concise, informative summaries. A comprehensive hybrid approach is proposed that combines extractive and abstractive methods, integrating Term Frequency-Inverse Document Frequency (TF-IDF) from Natural Language Processing (NLP) with the AutoModelForSeq2SeqLM class of the Hugging Face Transformers library for large language models. The performance of the proposed approach is compared with existing methods such as Bidirectional Encoder Representations from Transformers (BERT), TextRank, K-means, Facebook BART-Large-CNN, and GPT-2 using the ROUGE-1, ROUGE-2, and ROUGE-L metrics. The experimental results show that the hybrid approach outperforms these existing methods.
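As an illustration of the extractive stage described above, the sketch below scores sentences by their mean TF-IDF weight and keeps the top-k as a candidate summary. This is a minimal, pure-Python approximation, not the paper's implementation: the tokenization, scoring details, and function names are assumptions, and in the full hybrid pipeline the selected sentences would then be passed to an abstractive model loaded via AutoModelForSeq2SeqLM for rewriting.

```python
import math
import re
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the mean TF-IDF weight of its terms.

    Each sentence is treated as one 'document' when computing IDF.
    (Hypothetical helper for illustration only.)
    """
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n = len(tokenized)
    # Document frequency: in how many sentences each term appears.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    scores = []
    for tokens in tokenized:
        if not tokens:
            scores.append(0.0)
            continue
        tf = Counter(tokens)
        total = len(tokens)
        # Sum TF * IDF over the sentence's distinct terms, then average.
        score = sum((tf[t] / total) * math.log(n / df[t]) for t in set(tokens))
        scores.append(score / len(set(tokens)))
    return scores

def extractive_summary(text, k=2):
    """Keep the k highest-scoring sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scores = tfidf_sentence_scores(sentences)
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return " ".join(sentences[i] for i in sorted(top))
```

In a hybrid setup, `extractive_summary` would shorten a long clinical document to its most informative sentences, and the abstractive model would then compress and rephrase that reduced input.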
Copyright © 2025