AI and machine learning are crucial in advancing technology, especially for processing large, complex datasets. The transformer model, a primary approach in natural language processing (NLP), enables applications such as translation, text summarization, and question-answering (QA) systems. This study compares two widely used transformer models, FlanT5 and mT5, which often struggle to capture the specific context of a reference text. Using a unique Goddess Durga QA dataset containing specialized cultural knowledge about Indonesia, this research evaluates how effectively each model handles culturally specific QA tasks. The study involved data preparation, initial model training, ROUGE metric evaluation (ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum), and result analysis. Findings show that FlanT5 outperforms mT5 across multiple metrics, making it better suited to preserving cultural context. These results are relevant for NLP applications that rely on cultural insight, such as cultural-preservation QA systems and context-based educational platforms.
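
For concreteness, the evaluation step described above can be sketched as a small loop: generate an answer with each model, then score it against the gold answer with ROUGE. The following is a minimal illustration assuming the Hugging Face transformers and evaluate libraries; the checkpoints (google/flan-t5-base, google/mt5-base) and the QA pair are placeholders, and the study's actual models would first be fine-tuned on the Goddess Durga dataset.

    # Minimal sketch of the FlanT5 vs. mT5 comparison loop; the checkpoints and
    # the QA pair are illustrative placeholders, not the study's fine-tuned models.
    import evaluate
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    rouge = evaluate.load("rouge")  # reports rouge1, rouge2, rougeL, rougeLsum

    # Hypothetical example in the spirit of the Goddess Durga QA dataset.
    question = "question: At which Indonesian temple is the goddess Durga depicted?"
    reference = "Durga is depicted at the Prambanan temple complex in Central Java."

    results = {}
    for checkpoint in ["google/flan-t5-base", "google/mt5-base"]:
        tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

        # Encode the question and generate a short answer.
        inputs = tokenizer(question, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=64)
        prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)

        # F-measures for ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum against the gold answer.
        results[checkpoint] = rouge.compute(predictions=[prediction],
                                            references=[reference])

    print(results)

In the full study, such scores would be aggregated over the entire test split after fine-tuning, with higher F-measures indicating closer overlap with the culturally grounded reference answers.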