Rapid developments in information technology, such as chatbots and generative artificial intelligence, have drastically lowered the cost of providing services to society. This study measures the performance of a chatbot developed with retrieval-augmented generation (RAG) and a vector database, comparing existing Large Language Models (LLMs) in answering questions about regulations concerning public service agencies. Using the vector database, questions are matched against regulation documents and answered by each LLM, and the responses are assessed with cosine similarity scores. The best-performing model, gpt-4, which achieved an average cosine similarity score of 0.404, is selected for deployment. At the prototyping stage, an LLM-based chatbot with a RAG process over extracted regulation documents can provide good responses to questions related to public service agencies.
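To make the retrieval step concrete, the sketch below illustrates one way the described process could work: regulation chunks are embedded into a vector store, the question embedding is compared to them by cosine similarity, and the top-ranked chunks form the context passed to the LLM. The helper names (`embed`, `retrieve`) are hypothetical; the paper's specific vector database and embedding model are not shown here.

```python
# Minimal RAG retrieval sketch (hypothetical helpers; assumes some embed()
# function supplied by whatever embedding model the vector database uses).
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve(question_vec: np.ndarray,
             chunk_vecs: list,
             chunk_texts: list,
             top_k: int = 3):
    """Rank regulation chunks by cosine similarity to the question embedding."""
    scored = [(text, cosine_similarity(question_vec, vec))
              for text, vec in zip(chunk_texts, chunk_vecs)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]


# Usage (embed() stands in for the embedding model behind the vector database):
# context = retrieve(embed(question), [embed(c) for c in chunks], chunks)
# prompt = ("Answer using this context:\n"
#           + "\n".join(text for text, _ in context)
#           + "\n\nQ: " + question)
```

The same cosine similarity function can also be applied between a model's answer embedding and a reference answer embedding, which is one plausible way the reported average score of 0.404 could be computed.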