Multiple-choice tests are widely used in education, including at the higher-education level, as an efficient method for evaluating students' understanding. In Arabic language learning, multiple-choice tests are assumed to measure students' comprehension and mastery of the language. However, it is essential to evaluate the feasibility and reliability of such tests to ensure that their results accurately reflect students' abilities. This study evaluates the feasibility and reliability of multiple-choice Arabic tests in higher education. The evaluation assessed content validity, construct validity, reliability, and the correlation between test scores and students' academic performance. The results showed that 60% of the test questions were aligned with the existing curriculum, although only 14 of the 25 questions met the criteria for construct validity. Despite these shortcomings in construct validity, the test demonstrated high reliability, with a Cronbach's Alpha of 0.88, indicating consistent results. In addition, test scores showed a significant positive correlation with students' academic performance (r = 0.44), suggesting that the test reflects overall academic achievement. The study concludes that, despite its limitations in construct validity, the multiple-choice test remains a reliable evaluation tool. This conclusion provides insight into the test's effectiveness in measuring students' understanding and mastery of Arabic at the tertiary level.
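The two statistics reported above, Cronbach's Alpha for internal consistency and Pearson's r for the correlation with academic performance, can be computed directly from item-level scores. The following is a minimal sketch in Python using only the standard library; the item data and GPA values are invented for illustration and do not come from this study.

```python
# Illustrative computation of Cronbach's alpha and Pearson's r.
# All data below is hypothetical; the study's actual scores are not published here.
from statistics import pvariance, mean

def cronbach_alpha(item_scores):
    """item_scores: one list per test item, each with one score per student."""
    k = len(item_scores)
    # Variance of each item across students (population variance).
    item_vars = [pvariance(item) for item in item_scores]
    # Total score per student, then the variance of those totals.
    totals = [sum(scores) for scores in zip(*item_scores)]
    total_var = pvariance(totals)
    # Standard formula: alpha = k/(k-1) * (1 - sum of item variances / total variance)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented example: 3 dichotomously scored items (0/1) for 5 students.
items = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]
totals = [sum(s) for s in zip(*items)]        # total test score per student
gpa = [3.5, 3.0, 2.2, 3.6, 2.5]               # invented academic performance
print(round(cronbach_alpha(items), 2))        # → 0.79
print(round(pearson_r(totals, gpa), 2))       # → 0.99
```

In the study itself, the same procedure over 25 items produced an alpha of 0.88 (above the conventional 0.70 threshold for acceptable reliability) and a correlation of r = 0.44 with academic performance.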
Copyright © 2024