This study examines the performance of artificial intelligence, specifically ChatGPT, in solving physics problems on electrical circuits, comparing essay and multiple-choice formats. A research and development approach combined with descriptive qualitative analysis was used to develop and validate equivalent test items with identical content and cognitive demands. The validated items were administered to ChatGPT, and its responses were analyzed for accuracy and problem-solving process. The results show that ChatGPT achieved higher accuracy on multiple-choice questions (100%) than on essay questions (60%). Errors in essay responses were primarily conceptual and occurred in image-based and structurally complex circuit problems, particularly at the initial interpretation stage. These findings indicate that question format significantly influences AI performance: multiple-choice questions promote more structured reasoning but may overestimate AI capability. In conclusion, combining essay and multiple-choice formats provides a more comprehensive evaluation of AI problem-solving performance in physics education.
Copyright © 2026