Language, as the fundamental means of communication, is the symbolization of thought conveyed to others. Understanding the structure of a message relies on the speaker's language reasoning ability, which can be measured with stimuli ranging from the simplest to the most complex. Traditionally, language reasoning has been assessed through written tests, which require extensive preparation and are time-consuming. This study proposes a model for measuring language reasoning ability using a Computerized Adaptive Test (CAT). The CAT adjusts question difficulty in real time based on the participant's responses: if a participant answers correctly, the system presents a more challenging question; if the participant answers incorrectly, it selects an easier one. This adaptive approach yields a tailored, efficient assessment that accurately measures the participant's ability. The research began by developing a valid and reliable language reasoning test instrument and its quadrant classification, then determined the starting, jumping, and stopping points, culminating in the CAT design. The proposed CAT can map basic language reasoning skills, ranging from understanding factual concepts, applying linguistic rules according to the conventions established in the clauses read, breaking information down into more specific forms, judging the value of ideas, and combining word choice, idea formation, and context, to analogical and comparative thinking. The analysis revealed that the participants' dominant ability was comparative thinking, which involves comparing language forms, conditions, settings, and messages in written discourse. Moreover, the proposed CAT system was shown to speed up the testing process while enabling students to complete the tests according to their abilities.
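To make the adaptive mechanism concrete, the sketch below illustrates one simple way the correct-harder / incorrect-easier rule, together with starting, jumping, and stopping points, could be realized. It is a minimal illustration under assumed simplifications (a difficulty-ladder item bank, one-level jumps, and a fixed-length stopping rule); names such as `AdaptiveLanguageReasoningTest` are hypothetical and do not reflect the authors' actual system.

```python
import random

# A minimal, illustrative adaptive-test loop (not the authors' implementation).
# Assumptions: items are pre-calibrated into integer difficulty levels; the
# "jumping" rule moves one level up after a correct answer and one level down
# after an incorrect one; the test stops after a fixed number of items.

class AdaptiveLanguageReasoningTest:
    def __init__(self, item_bank, start_level, max_items=20):
        self.item_bank = item_bank    # dict: difficulty level -> list of questions
        self.level = start_level      # starting point
        self.max_items = max_items    # stopping point (fixed-length rule)
        self.administered = 0

    def next_item(self):
        # Draw an item at the current difficulty level (repeats not handled here).
        return random.choice(self.item_bank[self.level])

    def record_response(self, correct):
        # Jumping rule: harder after a correct answer, easier after an error,
        # clamped to the difficulty levels available in the bank.
        lo, hi = min(self.item_bank), max(self.item_bank)
        self.level = min(self.level + 1, hi) if correct else max(self.level - 1, lo)
        self.administered += 1

    def finished(self):
        return self.administered >= self.max_items

# Example usage with a tiny hypothetical bank:
bank = {1: ["easy Q1", "easy Q2"], 2: ["medium Q1"], 3: ["hard Q1"]}
test = AdaptiveLanguageReasoningTest(bank, start_level=2, max_items=3)
while not test.finished():
    question = test.next_item()
    test.record_response(correct=True)  # a real system would score the answer
```

In a production CAT, item selection would typically be driven by an item-response-theory ability estimate rather than a fixed difficulty ladder, but the control flow of administer, score, and adapt is the same.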