Research highlights a mismatch between the general scoring rubrics teachers use and the specific features required to assess genre-based writing, potentially because assessment models specific to the genre-based approach are lacking. This study aimed to develop genre-based assessment instruments and to evaluate their validity and reliability to ensure their suitability for classroom-based assessment. The study followed four stages of the research and development (R&D) model: initial product development, refinement, field-testing, and revision. Data were collected through a focus group discussion (FGD) with a fellow teacher on the test blueprint and rubric, a questionnaire on participants’ perceptions of test quality, and field-testing that yielded writing scores from 40 students. Inter-rater reliability was analyzed using SPSS, and participants deemed the instruments suitable for classroom use. The results show that genre-based assessment through the tasks effectively aligns instructional objectives with assessment practices. The analysis revealed a strong positive correlation between raters’ scores, confirming the reliability of the scoring rubric. Thus, the instruments are both valid and reliable for assessing students’ narrative writing performance. The findings emphasize the potential of genre-based assessment tools to align instructional goals with assessment practices, offering implications for enhancing the teaching and assessment of genre-specific writing in educational contexts.