Item analysis is used to determine the quality of test items, that is, whether they are suitable for assessing test takers' ability. Accordingly, this study measures the quality of self-constructed English test items for 8th grade students under Classical Test Theory (CTT) and Item Response Theory (IRT) with the Rasch model. We examined reliability, item difficulty, discrimination power, and distractor effectiveness according to both theories. In total, 30 multiple-choice items were administered to 46 students, and the responses were analyzed quantitatively with the Quest.exe application. The results show that the items are reliable, with values of 0.69 under CTT and 1.0 under IRT, and that item difficulty varies: 12 items are easy, 14 moderate, and 4 difficult according to the CTT categorization, while IRT yields similar results. Only 1 item was inadequate for differentiating students' ability and requires revision; furthermore, 17 of the 30 items have effective distractors. This research is expected to contribute to item analysis practice and to demonstrating Quest.exe for similar purposes.
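To make the CTT indices concrete, the following is a minimal illustrative sketch in Python, not the authors' Quest.exe workflow: it computes item difficulty (proportion correct), corrected item-total (point-biserial) discrimination, and KR-20 reliability from a hypothetical 0/1 response matrix whose dimensions mirror the study (46 students, 30 items).

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = students, columns = items.
# (Illustrative data only; the study analyzed real responses with Quest.exe.)
rng = np.random.default_rng(0)
responses = (rng.random((46, 30)) > 0.4).astype(int)

total_scores = responses.sum(axis=1)

# Item difficulty (p): proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

# Discrimination: point-biserial correlation between each item and the
# total score with that item removed (corrected item-total correlation).
discrimination = np.array([
    np.corrcoef(responses[:, i], total_scores - responses[:, i])[0, 1]
    for i in range(responses.shape[1])
])

# KR-20 reliability for dichotomous items: (k/(k-1)) * (1 - sum(p*q)/var(total)).
k = responses.shape[1]
item_variance_sum = (difficulty * (1 - difficulty)).sum()
kr20 = (k / (k - 1)) * (1 - item_variance_sum / total_scores.var(ddof=1))

print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
print("KR-20 reliability:", round(kr20, 2))
```

Under one common CTT convention, a difficulty index above about 0.7 is treated as easy, 0.3 to 0.7 as moderate, and below 0.3 as difficult, which is the kind of categorization behind the 12/14/4 split reported above.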