This study investigates the performance of FlanT5-based transformer models on Multiple-Question Answering (M-QA) tasks, in which several semantically related questions must be addressed with a single cohesive answer. Unlike traditional QA systems that handle one-to-one question-answer pairs, the M-QA setting requires the model to understand contextual relationships across multiple questions tied to the same topic. A custom dataset of shared contexts, grouped questions, and unified answers was developed to train and evaluate the model. The FlanT5 architecture was fine-tuned with three learning rates (0.0001, 0.0002, and 0.0003) to examine the effect of this training configuration on model performance, and outputs were evaluated with the ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum metrics. A learning rate of 0.0003 yielded the best performance, achieving a ROUGE-Lsum score of 0.7390. These results demonstrate that instruction-tuned transformers can handle complex, question-driven summarization scenarios that require contextual coherence, with relevance for real-world applications such as intelligent digital assistants, clinical decision support, and educational chatbots. The study also underscores the importance of hyperparameter tuning for deploying question-driven summarization systems efficiently and at scale.
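The abstract does not include the training code; the sketch below illustrates how the described setup could be reproduced with the Hugging Face transformers, datasets, and evaluate libraries. The checkpoint name (google/flan-t5-base), the dataset field names (context, questions, answer), the prompt template, and the batch size and epoch count are all assumptions for illustration, not details taken from the paper; only the three learning rates and the ROUGE metrics come from the abstract. Each example concatenates the shared context with its grouped questions into one prompt, the model is fine-tuned at each learning rate, and generated answers are scored with ROUGE.

```python
# Minimal sketch of the described setup, NOT the authors' released code.
# Assumptions (not stated in the paper): the "google/flan-t5-base" checkpoint,
# dataset fields named "context"/"questions"/"answer", the prompt template,
# and the batch size / epoch count below. Only the three learning rates and
# the ROUGE metrics are taken from the abstract.
import evaluate
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MODEL_NAME = "google/flan-t5-base"  # assumed model size
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
rouge = evaluate.load("rouge")  # reports rouge1, rouge2, rougeL, rougeLsum


def build_prompt(example):
    # One input = shared context + all grouped questions; one target = unified answer.
    return ("Answer all of the following questions in one cohesive response.\n"
            f"Context: {example['context']}\n"
            "Questions: " + " ".join(example["questions"]))


def to_features(example):
    model_inputs = tokenizer(build_prompt(example), max_length=512, truncation=True)
    labels = tokenizer(text_target=example["answer"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs


def train_one(train_ds: Dataset, lr: float):
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
    args = Seq2SeqTrainingArguments(
        output_dir=f"flan-t5-mqa-lr{lr}",
        learning_rate=lr,               # swept over 1e-4, 2e-4, 3e-4 per the abstract
        per_device_train_batch_size=8,  # assumed; not reported in the abstract
        num_train_epochs=3,             # assumed
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_ds.map(to_features, remove_columns=train_ds.column_names),
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    return model


def rouge_scores(model, eval_ds: Dataset):
    preds, refs = [], []
    for ex in eval_ds:
        inputs = tokenizer(build_prompt(ex), return_tensors="pt",
                           max_length=512, truncation=True).to(model.device)
        out = model.generate(**inputs, max_new_tokens=256)
        preds.append(tokenizer.decode(out[0], skip_special_tokens=True))
        refs.append(ex["answer"])
    return rouge.compute(predictions=preds, references=refs)


# Placeholder toy example (invented for illustration; not from the paper's dataset).
toy = Dataset.from_list([{
    "context": "Photosynthesis converts light energy into chemical energy in plants.",
    "questions": ["What does photosynthesis convert?", "Where does it occur?"],
    "answer": "It converts light energy into chemical energy and occurs in plants.",
}])

# The abstract reports 0.0003 as the best of the three rates (ROUGE-Lsum 0.7390).
for lr in (1e-4, 2e-4, 3e-4):
    model = train_one(toy, lr)
    print(lr, rouge_scores(model, toy))
```

Concatenating the grouped questions into a single prompt mirrors the M-QA framing in the abstract, in which one unified answer must cover all semantically related questions rather than each question receiving a separate response.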