This research evaluates the responsiveness and accuracy of two natural language processing systems, ChatGPT and Google BARD, in answering questions about the Python programming language. Accuracy is measured with the BLEU score, which quantifies how closely the answers generated by each system align with expected reference answers. The evaluation consists of experiments with a variety of Python-related questions. The results show an average BLEU score of 0.0088 for ChatGPT and 0.0073 for Google BARD, with average response times of 12.05 seconds and 18.38 seconds, respectively. Although the difference in accuracy is small, ChatGPT achieves a slightly higher BLEU score and a faster response time than Google BARD. The research concludes that, in the context of answering questions about the Python programming language, ChatGPT performs slightly better than Google BARD in terms of both answer accuracy and response time.
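As a minimal sketch of how a sentence-level BLEU score of the kind reported above can be computed, the snippet below uses NLTK's sentence_bleu. The reference answer and candidate answer are hypothetical examples, not taken from the study's dataset, and smoothing is one reasonable choice for short free-form answers rather than the study's confirmed configuration.

```python
# Minimal sketch: scoring one generated answer against one reference answer.
# Assumes NLTK is installed (pip install nltk); the texts are illustrative.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference answer and a system-generated candidate answer.
reference = "a list is a mutable ordered sequence of elements".split()
candidate = "a list is an ordered and mutable collection".split()

# Smoothing avoids zero scores when higher-order n-grams have no matches,
# which is common for short answers; method1 is one standard option.
smoothing = SmoothingFunction().method1
score = sentence_bleu([reference], candidate, smoothing_function=smoothing)
print(f"BLEU score: {score:.4f}")
```

Averaging such per-question scores over the full question set would yield the aggregate figures compared in this study.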