Artificial intelligence-based programs are used in many areas of daily life, often without our awareness. With the increasing integration of artificial intelligence applications into education, accessing information has become faster. As a result, chat programs that produce human-like, text-based answers are being used as educational tools. The accuracy of the content these programs generate has long been a topic of interest. In this study, we evaluated the success of the ChatGPT, Gemini, and Copilot applications in answering anatomy questions from the dental specialty exams held between 2012 and 2021. The free versions of ChatGPT-4, Google Gemini, and Microsoft Copilot were accessed on a computer, and the responses were recorded as either correct or incorrect. Of the 74 anatomy questions, ChatGPT, Gemini, and Copilot answered 2, 10, and 1 incorrectly, respectively. Although the evaluated programs showed sufficient success in answering the anatomy questions, their use was deemed limited due to errors in the supplementary information they provided.
Copyright © 2024