Discussion of peer assessment in English learning, particularly its effectiveness and shortcomings as an alternative evaluation method for English as a Foreign Language (EFL) learners, has been widespread. However, doubts about peer assessment's adequacy as an evaluation tool remain, and further validation is needed from studies that examine its reliability in wider learning contexts to establish whether the method can be as reliable as teacher grading, and hence whether it can reduce the teacher's assessment load, especially in large classes. Accordingly, this study examines the reliability of peer assessment in a large class of first-year non-native English-speaking university students majoring in software engineering, all of whom had passed English grammar and vocabulary courses on composing short text genres in an earlier semester. The data were collected and analyzed with a Wilcoxon signed-rank test and a bivariate Pearson correlation, comparing the students' peer assessments with the lecturer's grading of narrative texts written by 56 software engineering students. The findings show that peer assessment had low reliability as a tool for evaluating the students' writing quality, as indicated by the lack of agreement between the students' peer scores and the lecturer's grades. The study contributes evidence that peer assessment should not be relied on as an instrument for evaluating writing produced by non-native English-speaking students, even when they have passed courses expected to have enabled them to compose narrative texts. The conclusion is that peer assessment does little to relieve the teacher's assessment load in a large EFL writing class, despite the students' acquisition of English grammar and vocabulary at a certain level, although the method may still offer non-grading benefits such as promoting students' metacognition.
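To illustrate the analysis described above, the following is a minimal sketch, not the study's actual script, of how paired peer and lecturer scores could be compared with a Wilcoxon signed-rank test and a bivariate Pearson correlation in Python using SciPy; the score arrays are synthetic placeholders standing in for the 56 pairs of scores, and the Wilcoxon variant is assumed to be the signed-rank test for paired data.

    import numpy as np
    from scipy import stats

    # Synthetic paired scores for illustration only: one peer-assessment
    # score and one lecturer grade per narrative text (the study used n = 56).
    rng = np.random.default_rng(0)
    lecturer = rng.uniform(50, 95, size=56).round()
    peer = (lecturer + rng.normal(0, 12, size=56)).clip(0, 100).round()

    # Wilcoxon signed-rank test: do the paired peer and lecturer scores
    # differ systematically? A small p-value suggests a systematic gap.
    w_stat, w_p = stats.wilcoxon(peer, lecturer)
    print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.3f}")

    # Bivariate Pearson correlation: do peer scores rank the texts the way
    # the lecturer's grades do? A weak r indicates low reliability.
    r, r_p = stats.pearsonr(peer, lecturer)
    print(f"Pearson correlation: r = {r:.2f}, p = {r_p:.3f}")

Under this reading of the design, low reliability would appear as a weak Pearson r between the two score sets, with the Wilcoxon test additionally revealing any systematic leniency or severity in the peer scores relative to the lecturer's.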