This study analyses the effectiveness of two prompt styles, guided prompts and free prompts, in influencing the quality of answers generated by a Retrieval-Augmented Generation (RAG)-based Large Language Model (LLM) system using the Meta Llama 3 model. The system is designed to answer questions based on reference documents stored in vector form through an embedding process. The research was conducted using questions posed in both prompt styles, and the resulting answers were evaluated using two metrics, ROUGE and BERTScore. The results show that guided prompts yield higher scores on the ROUGE-1, ROUGE-2, and ROUGE-L metrics, reflecting better precision and lexical agreement. Meanwhile, the BERTScores of the two prompt styles did not differ significantly, indicating that in terms of meaning, or semantic similarity, they produce relatively equivalent results. These findings suggest that prompt design has a measurable impact on the structure and precision of answers.
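The evaluation described above presumably relied on standard ROUGE and BERTScore implementations (e.g. the `rouge-score` and `bert-score` Python packages). As a rough illustration of what the lexical-overlap side of the comparison measures, the following is a minimal, simplified ROUGE-N F1 sketch using only the standard library; the function name and tokenisation are assumptions for illustration, not the paper's actual pipeline:

```python
from collections import Counter


def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N F1: clipped n-gram overlap between a
    candidate answer and a reference answer (whitespace tokens)."""
    def ngrams(text: str, n: int) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


# ROUGE-1 on a toy pair: 5 of 6 unigrams overlap, so F1 = 5/6
print(rouge_n_f1("the cat sat on the mat", "the cat is on the mat"))
```

BERTScore, by contrast, compares contextual token embeddings rather than surface n-grams, which is why two answers with different wording can still score similarly on it, as the abstract reports.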
Copyright © 2025